feat: support galaxies, NGC objects, star-point solving and drawing

This commit is contained in:
grabbit 2025-06-28 10:50:22 +08:00
parent ea30baae3c
commit 9326e15366
32 changed files with 20207 additions and 294 deletions

1128
ARCHITECTURE.md Normal file

File diff suppressed because it is too large

0
CLAUDE.md Normal file

Cargo.toml

@ -17,7 +17,7 @@ serialport = "4.2.0" # Serial port for GPS
embedded-hal = { version = "0.2.7", optional = true } # Hardware abstraction layer
# Video processing
opencv = { version = "0.94.2" } # OpenCV bindings
opencv = { version = "0.94.4" } # OpenCV bindings
image = "0.25.5" # Image processing
clang-sys = { version = "=1.4.0", features = ["runtime", "clang_14_0"] }
@ -32,6 +32,10 @@ toml = "0.8.20"
serde = { version = "1.0.160", features = ["derive"] } # Serialization
serde_json = "1.0.96" # JSON support
chrono = { version = "0.4.24", features = ["serde"] } # Date and time
chrono-tz = "0.9.0" # Timezone support
fitsio = "0.20.0"
lz4_flex = "0.11.3" # Fast LZ4 compression for frame storage
rusqlite = { version = "0.34.0", features = ["bundled"] } # SQLite
# Networking and communication

327
PRD.md Normal file

@ -0,0 +1,327 @@
# Meteor Monitoring System - Product Requirements Document (PRD)
## 1. Product Overview
### 1.1 Product Vision
Build a Raspberry Pi based intelligent meteor monitoring system that gives amateur astronomers, research institutions and observatories a professional-grade solution for meteor observation and data collection.
### 1.2 Product Positioning
- **Target market**: amateur astronomers, observatories, research institutes, meteor observation networks
- **Product category**: embedded astronomical observation device
- **Technical positioning**: real-time computer vision + precise astronomical positioning
### 1.3 Core Value Proposition
- **24/7 monitoring**: unattended, continuous meteor monitoring
- **Professional-grade accuracy**: GPS timing + plate solving provide precise positioning in time and space
- **Intelligent detection**: automatic meteor recognition and classification based on machine vision
- **Networked collaboration**: multi-site networked observation and data sharing
## 2. Product Functional Requirements
### 2.1 Core Functional Modules
#### 2.1.1 Video Capture System
**Description**: provides high-quality video capture of the sky
- **Real-time video stream**: supports starlight-grade cameras at 30 fps
- **Adaptive parameters**: automatically adjusts exposure, gain and other parameters to lighting conditions
- **Multiple input modes**:
- Live camera capture (production)
- Video file input (test/replay mode)
- **Quality control**:
- Supported resolutions: 720p/1080p/4K
- Configurable frame rate: 15-60 fps
- Video encoding: H.264 compressed storage
**Acceptance criteria**:
- 24 hours of continuous operation without dropped frames
- Automatic parameter adjustment under changing lighting conditions in < 2 seconds
- Video quality meets the precision requirements of the meteor detection algorithms
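Illustrative sketch (not a normative requirement): opening a capture source with the `opencv` crate's `videoio` module and requesting the parameters above; the device index and property values here are assumptions, and drivers may clamp them.
```rust
use opencv::{prelude::*, videoio};

fn open_capture() -> opencv::Result<videoio::VideoCapture> {
    // Open the default camera; CAP_ANY lets OpenCV choose a backend.
    let mut cap = videoio::VideoCapture::new(0, videoio::CAP_ANY)?;
    // Request 1080p at 30 fps (the driver may adjust these values).
    cap.set(videoio::CAP_PROP_FRAME_WIDTH, 1920.0)?;
    cap.set(videoio::CAP_PROP_FRAME_HEIGHT, 1080.0)?;
    cap.set(videoio::CAP_PROP_FPS, 30.0)?;
    Ok(cap)
}
```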
#### 2.1.2 Intelligent Detection Engine
**Description**: real-time meteor detection based on computer vision
- **Multiple algorithms**:
- Brightness-change detection (fast)
- Background subtraction (precise)
- CAMS-standard detection (compatibility)
- Frame stacking for noise reduction (enhanced detection)
- **Real-time processing**: detection latency < 100 ms
- **Intelligent filtering**:
- Aircraft trail filtering
- Cloud interference rejection
- Rejection of nearby objects such as insects
- **Configurable parameters**:
- Detection sensitivity: 0.1-1.0
- Minimum track length: in pixels
- Duration threshold: 0.1-5 s
**Acceptance criteria**:
- Detection accuracy > 90%
- False-positive rate < 5%
- Detection latency < 100 ms
- Supports running multiple detection algorithms in parallel
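Illustrative sketch (assumed thresholds, not the project's detection module): the brightness-change path can be as simple as differencing consecutive grayscale frames and counting changed pixels.
```rust
use opencv::{core, imgproc};

/// Returns true when enough pixels brightened between two grayscale
/// frames to suggest a fast-moving bright object (a candidate meteor).
fn brightness_trigger(prev: &core::Mat, curr: &core::Mat) -> opencv::Result<bool> {
    let mut diff = core::Mat::default();
    core::absdiff(curr, prev, &mut diff)?;
    // Keep only pixels that changed by more than 40 grey levels (assumed).
    let mut mask = core::Mat::default();
    imgproc::threshold(&diff, &mut mask, 40.0, 255.0, imgproc::THRESH_BINARY)?;
    // Trigger when at least 25 pixels changed (assumed minimum track size).
    Ok(core::count_non_zero(&mask)? >= 25)
}
```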
#### 2.1.3 Precision Positioning System
**Description**: provides precise positioning in time and space
- **GPS time synchronization**:
- Nanosecond-level time accuracy (PPS signal)
- NMEA protocol support
- Automatic time-zone handling
- **Geolocation**:
- Latitude/longitude coordinates (accuracy < 3 m)
- Altitude information
- Magnetic declination compensation
- **Plate solving**:
- astrometry.net integration
- Real-time sky-region identification
- Stellar magnitude calibration
**Acceptance criteria**:
- GPS positioning accuracy < 3 m
- Time synchronization accuracy < 1 µs
- Plate-solving success rate > 80%
- Solve time < 30 s
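Illustrative sketch: `solve-field` is a command-line tool, so one plausible integration simply shells out to it; the flag values below are assumptions to be tuned per lens.
```rust
use std::process::Command;

/// Plate-solve a captured frame with astrometry.net's solve-field.
fn solve_frame(image_path: &str) -> std::io::Result<bool> {
    let status = Command::new("solve-field")
        .arg("--overwrite") // replace results from earlier runs
        .arg("--no-plots") // skip plot generation for speed
        .args(["--scale-units", "arcsecperpix"])
        .args(["--scale-low", "10"]) // assumed pixel-scale bounds
        .args(["--scale-high", "60"])
        .args(["--cpulimit", "30"]) // matches the < 30 s solve target
        .arg(image_path)
        .status()?;
    Ok(status.success())
}
```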
#### 2.1.4 Data Management System
**Description**: complete data management for meteor events
- **Event storage**:
- Video clips (10 seconds before and after each event)
- Raw frame sequences (JPEG)
- Metadata (JSON)
- **Storage strategy**:
- Ring buffer (the most recent N hours of data)
- Automatic cleanup (based on storage space and age)
- Compressed storage (saves disk space)
- **Data formats**:
- Video: MP4 (H.264)
- Images: JPEG (configurable quality)
- Metadata: JSON (including GPS, sensor and detection parameters)
**Acceptance criteria**:
- Storage utilization > 85%
- Data retrieval time < 1 s
- Supports storage capacities of 1 TB+
- Zero data loss (for critical events)
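Illustrative sketch of the ring-buffer strategy: a bounded queue that evicts the oldest entry when full (capacity and element type are placeholders).
```rust
use std::collections::VecDeque;

/// Minimal ring buffer: keeps at most `capacity` recent items.
struct RingBuffer<T> {
    buf: VecDeque<T>,
    capacity: usize,
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self { buf: VecDeque::with_capacity(capacity), capacity }
    }

    fn push(&mut self, item: T) {
        if self.buf.len() == self.capacity {
            self.buf.pop_front(); // evict the oldest entry
        }
        self.buf.push_back(item);
    }
}
```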
#### 2.1.5 Remote Monitoring System
**Description**: supports remote monitoring and control
- **Live streaming**:
- RTSP protocol support
- Concurrent access by multiple clients
- Configurable bitrate and resolution
- **Remote control interfaces**:
- HTTP REST API
- MQTT protocol support
- Real-time status push
- **Web management interface**:
- System status monitoring
- Parameter configuration management
- Historical data browsing
- Event statistics and analysis
**Acceptance criteria**:
- RTSP stream latency < 500 ms
- API response time < 100 ms
- Supports at least 5 concurrent clients
- Web interface compatible with mainstream browsers
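Illustrative sketch of status push over MQTT (the `rumqttc` crate, broker address and topic are assumptions, not project decisions):
```rust
use rumqttc::{Client, MqttOptions, QoS};
use std::time::Duration;

fn publish_status() {
    let mut opts = MqttOptions::new("meteor-detector-01", "broker.local", 1883);
    opts.set_keep_alive(Duration::from_secs(5));
    let (client, mut connection) = Client::new(opts, 10);
    // Push one status message; a real service would serialize live state.
    client
        .publish("meteor/status", QoS::AtLeastOnce, false, r#"{"state":"ok"}"#)
        .unwrap();
    // Drive the event loop so the publish is actually sent.
    for event in connection.iter().take(2) {
        let _ = event;
    }
}
```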
### 2.2 Enhanced Functional Modules
#### 2.2.1 Environmental Monitoring System
**Description**: monitors observing conditions
- **Weather sensors**:
- Temperature/humidity (DHT22)
- Light level (via the camera or a light sensor)
- **System monitoring**:
- CPU/memory usage
- Storage space status
- Network connection status
- Device temperature monitoring
#### 2.2.2 Video Enhancement System
**Description**: enhancement features for the video output
- **Watermark overlay**:
- Timestamp (configurable format)
- GPS coordinate display
- Environmental data (temperature, humidity)
- Device orientation information
- **Star chart overlay**:
- Real-time star chart display
- Constellation line drawing
- Labels for major celestial objects
- Sky grid display
#### 2.2.3 Network Collaboration System
**Description**: supports collaborative observation across devices
- **Data synchronization**:
- Event data upload to the cloud
- Correlation of observations from multiple stations
- Collaborative orbit computation
- **Standard protocol support**:
- CAMS FTP format compatibility
- IMO (International Meteor Organization) data format
- Custom API interface
## 3. Technical Requirements
### 3.1 Hardware Requirements
#### 3.1.1 Core Hardware
- **Compute platform**: Raspberry Pi 4B/5 (8 GB RAM recommended)
- **Camera**: starlight-grade CMOS sensor (IMX477/IMX462)
- **Storage**: high-speed microSD 64 GB+ or USB 3.0 SSD
- **Network**: dual Ethernet/WiFi support
#### 3.1.2 Optional Hardware
- **GPS module**: GPS receiver with PPS output
- **Sensors**: DHT22 temperature/humidity sensor
- **Power**: uninterruptible power supply (UPS) support
### 3.2 Software Environment
- **Operating system**: Raspberry Pi OS / Ubuntu 20.04+
- **Runtime**: Rust 1.70+ / Tokio async runtime
- **Dependencies**: OpenCV 4.x, FFmpeg, astrometry.net
- **Database**: SQLite 3 (local) + optional cloud database
### 3.3 Performance Targets
- **Detection latency**: < 100 ms (real-time detection)
- **System response**: < 500 ms (user interface)
- **Continuous operation**: > 30 days (unattended)
- **Data throughput**: > 1000 events/day (during high-activity periods)
## 4. User Experience Requirements
### 4.1 Installation and Deployment
- **One-click install**: automated installation script
- **Setup wizard**: graphical or command-line configuration tool
- **Hardware detection**: automatic detection and configuration of hardware devices
- **Documentation**: detailed installation and configuration docs
### 4.2 Daily Use
- **Zero-maintenance operation**: automatic fault recovery and system tuning
- **Smart alerts**: timely notification of key events and system anomalies
- **Simple configuration**: main parameters adjustable via the web UI or config file
- **Transparent status**: real-time display of system state and statistics
### 4.3 Data Access
- **Multiple access methods**: web UI, API, direct file access
- **Data export**: export in multiple standard formats
- **Fast search**: quick search by time, location and event type
- **Data backup**: automatic or manual backup of important observation data
## 5. Quality Requirements
### 5.1 Reliability
- **System stability**: MTBF > 720 hours (30 days)
- **Data integrity**: zero loss of critical event data
- **Automatic recovery**: recovery from software faults within 30 seconds
- **Error handling**: graceful handling of all exceptional conditions
### 5.2 Performance
- **Real-time processing**: detection throughput > 30 fps
- **Resource usage**: CPU usage < 80% (average load)
- **Memory management**: memory usage < 2 GB (on an 8 GB system)
- **Storage efficiency**: compression ratio > 50% (without degrading detection accuracy)
### 5.3 Scalability
- **Modular design**: functional modules can be upgraded independently
- **Plugin architecture**: supports third-party detection algorithm plugins
- **Multi-device collaboration**: supports deployments of 10+ networked devices
- **Cloud integration**: supports integration with multiple cloud platforms
### 5.4 Security
- **Network security**: all external communication supports TLS encryption
- **Access control**: API access requires authentication and authorization
- **Data protection**: optional encrypted storage of local data
- **Update security**: digital-signature verification of update packages
## 6. Compliance Requirements
### 6.1 Open Source License
- **MIT license**: the code is released under the MIT license
- **Third-party libraries**: compatible with the license requirements of all dependencies
- **Open documentation**: technical documentation under a Creative Commons license
### 6.2 Standards Compatibility
- **CAMS format**: compatible with the CAMS network data format
- **NMEA protocol**: full support for the GPS NMEA protocol
- **Video standards**: outputs standard H.264/MP4
## 7. Project Constraints
### 7.1 Technical Constraints
- **Hardware platform**: primarily optimized for the ARM64 architecture
- **Programming language**: developed in Rust (for performance and safety)
- **Dependency management**: minimize external dependencies; prefer the Rust ecosystem
- **Cross-platform**: supports development and testing on Linux/macOS
### 7.2 Resource Constraints
- **Development team**: small team with an emphasis on code quality and documentation
- **Maintenance cost**: an architecture designed to be simple to maintain
- **Hardware cost**: keep the per-unit cost under 500 USD
### 7.3 Schedule Constraints
- **Iterative development**: agile development with a new release every month
- **Feature priorities**: core features first, enhancements in later iterations
- **Documentation in lockstep**: documentation written in parallel with development
## 8. Acceptance Criteria
### 8.1 Functional Acceptance
- [ ] 30 days of continuous operation without major failures
- [ ] Detection accuracy of 90% or higher
- [ ] All API endpoints 100% available
- [ ] All web-interface features working
### 8.2 Performance Acceptance
- [ ] Detection latency < 100 ms
- [ ] System resource usage within reasonable bounds
- [ ] Stable and reliable network transmission
- [ ] Automated storage management
### 8.3 User Experience Acceptance
- [ ] Installation and deployment in < 30 minutes
- [ ] Configuration of the main features in < 10 minutes
- [ ] User documentation covering all features
- [ ] Clear solutions for common problems
## 9. Risk Assessment
### 9.1 Technical Risks
- **Hardware compatibility**: compatibility issues across different cameras and sensors
- **Algorithm accuracy**: robustness of the detection algorithms in different environments
- **Performance bottlenecks**: hardware limits of the Raspberry Pi
### 9.2 Market Risks
- **User acceptance**: how readily amateur astronomers adopt the product
- **Competing products**: competitive pressure from existing commercial solutions
- **Maintenance and support**: long-term technical support and community building
### 9.3 Risk Mitigation
- **Thorough testing**: test across many environments and hardware configurations
- **Community building**: establish a user community and feedback channels
- **Complete documentation**: provide detailed technical docs and tutorials
- **Open-source advantage**: leverage the open-source community for improvements
## 10. Success Metrics
### 10.1 Technical Metrics
- **Detection accuracy**: > 90%
- **System availability**: > 99% uptime
- **Processing performance**: 30 fps real-time detection
- **Data quality**: < 1% data loss
### 10.2 User Metrics
- **User satisfaction**: > 4.5/5.0 rating
- **Community activity**: > 100 active users/month
- **Issue resolution**: < 24 h response time
- **Feature usage**: > 80% usage of the core features
### 10.3 Project Metrics
- **Code quality**: > 90% code coverage
- **Documentation completeness**: 100% feature documentation coverage
- **Release cadence**: one release per month
- **Community contributions**: > 10 external contributors
---
*This PRD will be updated and refined as the project progresses.*

README.md

@ -5,7 +5,9 @@
## Features
- **Real-time video capture**: supports starlight-grade cameras with dynamically adjustable parameters
- **Video file input**: supports pre-recorded video files for testing and development
- **Automatic meteor detection**: analyzes video frames with computer-vision algorithms
- **Automatic plate solving**: uses astrometry.net to solve the star field currently in view in real time
- **GPS time and position synchronization**: provides precise event metadata
- **Local ring-buffer storage**: continuously records recent video
- **Cloud upload of detection events**: uploads event clips together with their metadata
@ -20,7 +22,7 @@ meteor_detect/
│ ├── camera/ # Camera control and video capture
│ │ ├── controller.rs # Camera controller
│ │ ├── frame_buffer.rs # Video frame buffer
│ │ └── v4l2.rs # V4L2 camera driver (removed)
│ │ └── opencv.rs # OpenCV camera driver (added)
│ ├── gps/ # GPS and time synchronization
│ │ ├── controller.rs # GPS controller
│ │ └── nmea.rs # NMEA parser
@ -28,13 +30,25 @@ meteor_detect/
│ │ ├── controller.rs # Sensor controller
│ │ └── dht22.rs # Temperature/humidity sensor driver
│ ├── detection/ # Meteor detection algorithms
│ ├── overlay/ # Video overlay layers
│ │ ├── watermark.rs # Watermark overlay
│ │ └── star_chart.rs # Star chart overlay
│ ├── storage/ # Data storage management
│ ├── communication/ # Communication and remote control
│ ├── monitoring/ # System monitoring
│ ├── config.rs # Configuration management
│ └── main.rs # Application entry point
├── demos/ # Demo programs
│ ├── camera_demo.rs # Camera demo
│ ├── watermark_demo.rs # Watermark overlay demo
│ └── file_input_demo.rs # File input demo
├── docs/ # Documentation
│ ├── design.md # Design document
│ ├── star_chart.md # Star chart feature docs
│ └── file_input.md # File input mode docs
├── Cargo.toml # Project configuration and dependencies
├── config-example.toml # Example configuration file
├── config-file-input.toml # Example configuration for file input mode
├── build.sh # Build script
└── README.md # Project documentation
```
@ -51,6 +65,7 @@ meteor_detect/
- **Rust 1.70+** (2021 edition)
- **OpenCV 4.x**
- **Astrometry.net** plate-solving tools
- **V4L2** camera utilities
- **SQLite 3**
@ -74,7 +89,8 @@ curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt update
sudo apt install -y git curl build-essential pkg-config \
libssl-dev libv4l-dev v4l-utils \
- libopencv-dev libsqlite3-dev
+ libopencv-dev libsqlite3-dev \
+ astrometry.net astrometry-data-tycho2
```
### Building the Project
@ -107,23 +123,73 @@ nano ~/.config/meteor_detect/config.toml
cargo run --release
```
### Running the Demo Programs
The system includes several demo programs for testing specific features:
```bash
# Run the camera demo
cargo run --bin camera_demo
# Run the watermark overlay demo
cargo run --bin watermark_demo
# Run the file input demo (uses a video file instead of a camera)
cargo run --bin file_input_demo
```
## Configuration Options
The configuration file lives at `~/.config/meteor_detect/config.toml` and contains these main sections:
- **Camera parameters**: resolution, exposure, gain
- **File input settings**: enable file input mode, video file path
- **Plate solving**: astrometry.net path, update frequency, display options
- **GPS settings**: port, baud rate, PPS configuration
- **Detection parameters**: sensitivity, trigger thresholds
- **Storage policy**: retention time, compression settings
- **Communication options**: MQTT server, API configuration
- For a full example configuration, see `config-example.toml`.
+ For full example configurations, see `config-example.toml` and `config-file-input.toml`.
## File Input Mode
The system can use a pre-recorded video file instead of a live camera, which is convenient for development and testing:
```toml
[camera]
# Enable file input mode
file_input_mode = true
# Path to the video file
input_file_path = "/path/to/your/starsky_video.mp4"
# Whether to loop the video
loop_video = true
```
See the [file input mode documentation](docs/file_input.md) for more information.
## Star Chart Solving
The system can use the astrometry.net tools to automatically identify and solve the stars in the current view and display a star chart overlay:
```toml
[star_chart]
# Enable the star chart overlay
enabled = true
# Path to the astrometry.net solve-field tool
solve_field_path = "/usr/local/bin/solve-field"
# Update frequency (seconds)
update_frequency = 30.0
```
See the [star chart documentation](docs/star_chart.md) for more information.
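Internally these TOML sections deserialize into serde-derived config structs; a simplified sketch (field set abbreviated, not the actual definitions in `config.rs`):
```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct StarChartSection {
    enabled: bool,
    solve_field_path: String,
    update_frequency: f64,
}

fn parse_example() -> Result<(), toml::de::Error> {
    let raw = r#"
        enabled = true
        solve_field_path = "/usr/local/bin/solve-field"
        update_frequency = 30.0
    "#;
    let section: StarChartSection = toml::from_str(raw)?;
    assert!(section.enabled);
    Ok(())
}
```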
## Development Guide
- Run tests with `cargo test`
- Clean build artifacts with `./build.sh clean`
- See the [design document](docs/design.md) for the system architecture
- Use [file input mode](docs/file_input.md) for development and testing
## License


@ -0,0 +1,240 @@
# Display System Optimization Roadmap
## Phase 2: GStreamer Integration
### Overview
Implement GStreamer-based display backend for hardware-accelerated rendering, particularly optimized for embedded systems like Raspberry Pi.
### Goals
- **Performance**: 60-80% reduction in CPU usage
- **Hardware Acceleration**: Utilize GPU/VPU when available
- **Zero-Copy**: Minimize memory bandwidth usage
- **Flexibility**: Support multiple output formats (window, file, network stream)
### Implementation Plan
#### 1. Core GStreamer Backend (`src/display/gstreamer_backend.rs`)
```rust
pub struct GStreamerDisplay {
pipeline: gst::Pipeline,
appsrc: gst_app::AppSrc,
bus: gst::Bus,
config: DisplayConfig,
stats: DisplayStats,
}
impl DisplayBackend for GStreamerDisplay {
// Implement trait methods using GStreamer pipeline
}
```
**Key Features:**
- Dynamic pipeline creation based on output requirements
- Hardware encoder detection and utilization
- Efficient buffer management with memory pools
- Error handling and pipeline state management
#### 2. Pipeline Configurations
##### Basic Window Display
```
appsrc ! videoconvert ! videoscale ! xvimagesink
```
##### Hardware-Accelerated (Raspberry Pi)
```
appsrc ! v4l2convert ! video/x-raw,format=NV12 ! v4l2h264enc ! h264parse ! v4l2h264dec ! xvimagesink
```
##### Network Streaming
```
appsrc ! videoconvert ! x264enc ! rtph264pay ! udpsink host=192.168.1.100 port=5000
```
##### File Recording
```
appsrc ! videoconvert ! x264enc ! mp4mux ! filesink location=output.mp4
```
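In the Rust backend the same textual descriptions can be parsed directly (a sketch against the gstreamer 0.20 bindings; the element chain is an assumption):
```rust
use gstreamer as gst;
use gstreamer_app as gst_app;
use gst::prelude::*;

fn build_pipeline() -> Result<(gst::Pipeline, gst_app::AppSrc), Box<dyn std::error::Error>> {
    gst::init()?;
    // Same description syntax as the gst-launch lines above.
    let pipeline = gst::parse_launch("appsrc name=src ! videoconvert ! autovideosink")?
        .downcast::<gst::Pipeline>()
        .expect("parse_launch returned a non-pipeline element");
    let appsrc = pipeline
        .by_name("src")
        .expect("appsrc element not found")
        .downcast::<gst_app::AppSrc>()
        .expect("element is not an appsrc");
    Ok((pipeline, appsrc))
}
```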
#### 3. Dependencies and Setup
**Cargo.toml additions:**
```toml
[dependencies]
gstreamer = "0.20"
gstreamer-app = "0.20"
gstreamer-video = "0.20"
[features]
gstreamer-display = ["gstreamer", "gstreamer-app", "gstreamer-video"]
```
**System Dependencies:**
```bash
# Ubuntu/Debian
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
# Raspberry Pi additional packages
sudo apt install gstreamer1.0-omx gstreamer1.0-plugins-bad
```
#### 4. Buffer Management Optimization
```rust
struct BufferPool {
pool: gst::BufferPool,
allocator: gst::Allocator,
}
impl BufferPool {
    fn get_buffer(&self, mat: &opencv::core::Mat) -> Result<gst::Buffer> {
        // Goal: zero-copy conversion from an OpenCV Mat to a GStreamer
        // Buffer (memory-map the Mat's data when the layout allows).
        // Safe fallback sketch: copy the Mat bytes into a new buffer.
        let bytes = mat.data_bytes()?.to_vec();
        Ok(gst::Buffer::from_mut_slice(bytes))
    }
}
```
#### 5. Auto-Detection and Fallback
```rust
pub fn create_optimal_display(config: DisplayConfig) -> Box<dyn DisplayBackend> {
    // `gstreamer_available()` / `hardware_acceleration_available()` are
    // placeholder probes; a possible implementation is sketched below.
    if gstreamer_available() && hardware_acceleration_available() {
Box::new(GStreamerDisplay::new(config))
} else {
Box::new(OpenCVDisplay::new(config))
}
}
```
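The probes above are placeholders; one possible implementation checks that GStreamer initializes and that a hardware element factory is present (using the V4L2 H.264 encoder as a rough Raspberry Pi signal, which is itself an assumption):
```rust
use gstreamer as gst;

fn gstreamer_available() -> bool {
    gst::init().is_ok()
}

fn hardware_acceleration_available() -> bool {
    // Presence of the V4L2 hardware H.264 encoder element.
    gst::ElementFactory::find("v4l2h264enc").is_some()
}
```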
### Hardware-Specific Optimizations
#### Raspberry Pi 4/5
- **GPU Memory Split**: `gpu_mem=128` in `/boot/config.txt`
- **V4L2 Codecs**: Utilize hardware H.264 encoder/decoder
- **DMA Buffers**: Use DMA-BUF for zero-copy operations
#### NVIDIA Jetson
- **NVENC/NVDEC**: Hardware encoding/decoding
- **CUDA Integration**: GPU-accelerated image processing
#### Intel Systems
- **VAAPI**: Video Acceleration API support
- **Quick Sync**: Hardware encoding acceleration
### Performance Targets
| Metric | Current (OpenCV) | Target (GStreamer) | Improvement |
|--------|------------------|-------------------|-------------|
| CPU Usage | 60-80% | 15-25% | 60-75% reduction |
| Memory Bandwidth | 2-3 GB/s | 0.5-1 GB/s | 50-75% reduction |
| Latency | 3-5 frames | 1-2 frames | 60% reduction |
| Power Consumption | High | Low | 30-50% reduction |
### Implementation Timeline
#### Week 1-2: Foundation
- [ ] Set up GStreamer Rust bindings
- [ ] Create basic pipeline structure
- [ ] Implement DisplayBackend trait
- [ ] Basic window display working
#### Week 3-4: Optimization
- [ ] Buffer pool implementation
- [ ] Zero-copy Mat to Buffer conversion
- [ ] Hardware acceleration detection
- [ ] Pipeline optimization
#### Week 5-6: Advanced Features
- [ ] Multiple output support (file, network)
- [ ] Error recovery and robustness
- [ ] Performance profiling and tuning
- [ ] Documentation and examples
#### Week 7-8: Integration and Testing
- [ ] Update demos to use GStreamer backend
- [ ] Raspberry Pi testing and optimization
- [ ] Benchmark comparison with OpenCV
- [ ] Production readiness assessment
### Compatibility Matrix
| Platform | GStreamer Support | Hardware Accel | Recommended Pipeline |
|----------|------------------|----------------|---------------------|
| Raspberry Pi 4/5 | ✅ | V4L2 | v4l2convert + omx |
| NVIDIA Jetson | ✅ | NVENC/NVDEC | nvvidconv + nvenc |
| Intel x86 | ✅ | VAAPI | vaapi + intel-media |
| Generic Linux | ✅ | Software | videoconvert + x264 |
| macOS | ✅ | VideoToolbox | vtenc + osxvideosink |
| Windows | ✅ | DirectShow | d3d11 + mf |
### Testing Strategy
#### Unit Tests
- Pipeline creation and teardown
- Buffer management
- Error handling
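A minimal creation/teardown test for the first item might look like this (a sketch; `videotestsrc` stands in for the real appsrc pipeline):
```rust
#[cfg(test)]
mod tests {
    use gstreamer as gst;
    use gst::prelude::*;

    #[test]
    fn pipeline_creates_and_tears_down() {
        gst::init().unwrap();
        let pipeline =
            gst::parse_launch("videotestsrc num-buffers=1 ! fakesink").unwrap();
        pipeline.set_state(gst::State::Playing).unwrap();
        // Wait for end-of-stream or an error before shutting down.
        let bus = pipeline.bus().unwrap();
        let _msg = bus.timed_pop_filtered(
            gst::ClockTime::NONE,
            &[gst::MessageType::Eos, gst::MessageType::Error],
        );
        pipeline.set_state(gst::State::Null).unwrap();
    }
}
```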
#### Integration Tests
- End-to-end display functionality
- Performance benchmarking
- Memory leak detection
#### Hardware Tests
- Raspberry Pi 4/5 validation
- NVIDIA Jetson compatibility
- Various resolution and framerate combinations
### Risk Mitigation
#### Potential Issues
1. **GStreamer Version Compatibility**: Different distributions have different GStreamer versions
2. **Hardware Driver Issues**: Proprietary drivers may not be available
3. **Memory Alignment**: OpenCV Mat and GStreamer Buffer memory layout differences
#### Solutions
1. **Runtime Detection**: Probe capabilities at startup
2. **Graceful Fallback**: Fall back to OpenCV backend if GStreamer fails
3. **Memory Copy Fallback**: Use memory copy if zero-copy fails
### Success Metrics
#### Must-Have
- [ ] 50%+ reduction in CPU usage on Raspberry Pi
- [ ] Maintain visual quality parity with OpenCV
- [ ] No memory leaks during extended operation
- [ ] Graceful error handling and recovery
#### Nice-to-Have
- [ ] Network streaming capability
- [ ] Recording to file functionality
- [ ] Multi-window support
- [ ] Real-time performance monitoring
### Documentation Requirements
- [ ] API documentation for new backend
- [ ] Hardware setup guides for different platforms
- [ ] Performance tuning recommendations
- [ ] Troubleshooting guide for common issues
- [ ] Migration guide from OpenCV backend
---
## Phase 3: Future Enhancements
### WebRTC Integration
- Real-time streaming to web browsers
- Low-latency remote monitoring
### Custom Shaders
- GPU-accelerated overlay rendering
- Advanced visual effects
### Multi-Display Support
- Simultaneous output to multiple displays
- Picture-in-picture functionality
---
*This roadmap will be updated as Phase 2 development progresses.*

110
config-file-input.toml Normal file

@ -0,0 +1,110 @@
# Meteor Detection System Configuration with File Input Mode
# Unique identifier for this detector
device_id = "meteor-detector-01"
# Logging level (trace, debug, info, warn, error)
log_level = "info"
# Camera settings
[camera]
# Enable file input mode instead of using a camera
file_input_mode = true
# Path to video file
input_file_path = "/Users/grabbit/Downloads/20250103 象限儀座流星雨直播 [oELPd-LpAMw].webm"
# Loop the video when it reaches the end
loop_video = true
# Camera device (not used in file input mode, but kept for compatibility)
device = "/dev/video0"
# Resolution (options: HD1080p, HD720p, VGA)
resolution = "HD720p"
# Target frames per second (may be limited by the file's actual fps)
fps = 30
# Exposure mode (Auto or Manual exposure time in microseconds)
exposure = "Auto"
# Gain/ISO setting (0-255)
gain = 128
# Whether to lock focus at infinity
focus_locked = true
# GPS and time synchronization
[gps]
# Whether to enable GPS functionality
enable_gps = true
# Serial port for GPS module
port = "/dev/ttyAMA0"
# Baud rate
baud_rate = 9600
# Whether to use PPS signal for precise timing
use_pps = true
# GPIO pin for PPS signal (BCM numbering)
pps_pin = 18
# Allow system to run without GPS (using fallback position)
allow_degraded_mode = true
# Camera orientation
[gps.camera_orientation]
# Azimuth/heading in degrees (0 = North, 90 = East)
azimuth = 0
# Elevation/pitch in degrees (0 = horizontal, 90 = straight up)
elevation = 90
# Fallback GPS position (used when GPS is not available)
[gps.fallback_position]
# Latitude in degrees (positive is North, negative is South)
latitude = 34.2
# Longitude in degrees (positive is East, negative is West)
longitude = -118.2
# Altitude in meters above sea level
altitude = 85.0
# Star chart overlay settings
[star_chart]
# Whether the star chart overlay is enabled
enabled = true
# Path to the astrometry.net solve-field binary
solve_field_path = "/usr/local/bin/solve-field"
# Path to the astrometry.net index files
index_path = "/Users/grabbit/Project/astrometry/index-4100"
# Update frequency in seconds
update_frequency = 30
# Star marker color (B, G, R, A)
star_color = [0, 255, 0, 255] # Green
# Constellation line color (B, G, R, A)
constellation_color = [0, 180, 0, 180] # Semi-transparent green
# Star marker size
star_size = 3
# Constellation line thickness
line_thickness = 1
# Working directory for temporary files
working_dir = "/tmp/astrometry"
# Whether to show star names
show_star_names = true
# Size of index files to use (in arcminutes)
index_scale_range = [10, 60]
# Maximum time to wait for solve-field (seconds)
max_solve_time = 60
# Camera calibration for distortion correction (optional)
# If not provided, no distortion correction will be applied
# [star_chart.camera_calibration]
# # Camera intrinsic matrix (3x3) [fx, 0, cx; 0, fy, cy; 0, 0, 1]
# camera_matrix = [1000.0, 0.0, 640.0, 0.0, 1000.0, 360.0, 0.0, 0.0, 1.0]
# # Distortion coefficients [k1, k2, p1, p2, k3]
# distortion_coeffs = [-0.2, 0.1, 0.0, 0.0, 0.0]
# # Image dimensions for calibration
# image_width = 1280
# image_height = 720
# Storage settings
[storage]
# Directory for storing raw video data
raw_video_dir = "data/raw"
# Directory for storing event video clips
event_video_dir = "data/events"
# Maximum disk space to use for storage (in MB)
max_disk_usage_mb = 10000
# Number of days to keep event data
event_retention_days = 30
# Whether to compress video files
compress_video = true

BIN
demo_frame.jpg Normal file

Binary file not shown (image, 96 KiB).

37
demos/Cargo.toml Normal file

@ -0,0 +1,37 @@
[package]
name = "meteor-detect-demos"
version = "0.1.0"
edition = "2021"
authors = ["Meteor Detection Team"]
description = "Demonstration programs for the Meteor Detection System"
[dependencies]
meteor_detect = { path = ".." }
tokio = { version = "1", features = ["full"] }
opencv = { version = "0.94.2" } # OpenCV bindings
anyhow = "1.0"
chrono = "0.4"
log = "0.4"
env_logger = "0.10"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
chrono-tz = "0.9.0" # Timezone support
fitsio = "0.20.0"
freetype-rs = "0.38.0"
[[bin]]
name = "camera_demo"
path = "camera_demo.rs"
[[bin]]
name = "watermark_demo"
path = "watermark_demo.rs"
[[bin]]
name = "file_input_demo"
path = "file_input_demo.rs"
[[bin]]
name = "star_chart_demo"
path = "star_chart_demo.rs"

13959
demos/NGC.csv Normal file

File diff suppressed because it is too large

153
demos/camera_demo.rs Normal file

@ -0,0 +1,153 @@
use std::io;
use chrono::Local;
use opencv::{core, highgui, imgproc, prelude::*};
// Simplified camera module for demo
mod camera {
use opencv::{core, prelude::*, videoio};
use chrono::{DateTime, Utc};
pub struct Frame {
pub image: core::Mat,
pub timestamp: DateTime<Utc>,
pub index: u64,
}
impl Frame {
pub fn new(image: core::Mat, timestamp: DateTime<Utc>, index: u64) -> Self {
Self { image, timestamp, index }
}
}
pub struct CameraDemo {
capture: videoio::VideoCapture,
frame_count: u64,
}
impl CameraDemo {
pub fn new(device_id: i32) -> Result<Self, Box<dyn std::error::Error>> {
let capture = videoio::VideoCapture::new(device_id, videoio::CAP_ANY)?;
if !capture.is_opened()? {
return Err(format!("Failed to open camera device {}", device_id).into());
}
Ok(Self {
capture,
frame_count: 0,
})
}
pub fn from_file(path: &str) -> Result<Self, Box<dyn std::error::Error>> {
let capture = videoio::VideoCapture::from_file(path, videoio::CAP_ANY)?;
if !capture.is_opened()? {
return Err(format!("Failed to open video file {}", path).into());
}
Ok(Self {
capture,
frame_count: 0,
})
}
pub fn capture_frame(&mut self) -> Result<Frame, Box<dyn std::error::Error>> {
let mut frame = core::Mat::default();
if self.capture.read(&mut frame)? {
if frame.empty() {
return Err("Captured frame is empty".into());
}
let timestamp = Utc::now();
let index = self.frame_count;
self.frame_count += 1;
Ok(Frame::new(frame, timestamp, index))
} else {
Err("Failed to capture frame".into())
}
}
}
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("*** Camera Module Demo ***");
println!("This demo captures frames from a camera device or video file and displays them");
println!("Press 'q' to quit");
println!();
println!("Choose input source:");
println!("1. Camera device");
println!("2. Video file");
let mut choice = String::new();
io::stdin().read_line(&mut choice)?;
let mut camera_demo = match choice.trim() {
"1" => {
println!("Enter camera device ID (default is 0):");
let mut device_id = String::new();
io::stdin().read_line(&mut device_id)?;
let device_id = device_id.trim().parse::<i32>().unwrap_or(0);
println!("Opening camera device {}", device_id);
camera::CameraDemo::new(device_id)?
},
"2" => {
println!("Enter video file path:");
let mut file_path = String::new();
io::stdin().read_line(&mut file_path)?;
let file_path = file_path.trim();
println!("Opening video file: {}", file_path);
camera::CameraDemo::from_file(file_path)?
},
_ => {
println!("Invalid choice, using default camera (device 0)");
camera::CameraDemo::new(0)?
}
};
highgui::named_window("Camera Module Demo", highgui::WINDOW_NORMAL)?;
loop {
match camera_demo.capture_frame() {
Ok(frame) => {
// Add timestamp to the frame
let mut display_frame = frame.image.clone();
// Convert UTC timestamp to local time
let local_timestamp = frame.timestamp.with_timezone(&Local);
let timestamp_text = local_timestamp.format("%Y-%m-%d %H:%M:%S%.3f").to_string();
imgproc::put_text(
&mut display_frame,
&timestamp_text,
core::Point::new(10, 30),
imgproc::FONT_HERSHEY_SIMPLEX,
1.0,
core::Scalar::new(0.0, 255.0, 0.0, 0.0),
2,
imgproc::LINE_AA,
false,
)?;
// Display the frame
highgui::imshow("Camera Module Demo", &display_frame)?;
// Break the loop if 'q' is pressed
if highgui::wait_key(30)? == 'q' as i32 {
break;
}
},
Err(e) => {
println!("Error capturing frame: {}", e);
break;
}
}
}
highgui::destroy_all_windows()?;
Ok(())
}

114
demos/file_input_demo.rs Normal file

@ -0,0 +1,114 @@
use std::io;
use std::sync::{Arc, Mutex};
use anyhow::Result;
use chrono::Local;
use opencv::{core, highgui, prelude::*};
// Import modules from the project
use meteor_detect::camera::CameraController;
use meteor_detect::config::load_config;
use meteor_detect::gps::GpsStatus;
use meteor_detect::overlay::star_chart::StarChart;
#[tokio::main]
async fn main() -> Result<()> {
println!("*** File Input Demo ***");
println!("This demo shows how to use a video file as input for the meteor detection system");
println!("It will apply the star chart overlay to the video frames");
println!("Press 'q' to quit");
println!();
    // Read the video file path
println!("Enter the path to a video file:");
let mut file_path = String::new();
io::stdin().read_line(&mut file_path)?;
let file_path = file_path.trim();
    // Check that the file exists
if !std::path::Path::new(file_path).exists() {
println!("Error: File '{}' does not exist", file_path);
return Ok(());
}
    // Create the configuration
    let mut config = load_config().unwrap();
    // Switch to file input mode
    config.camera.file_input_mode = true;
    config.camera.input_file_path = file_path.to_string();
    config.camera.loop_video = true;
    // Make sure the star chart overlay is enabled
    config.star_chart.enabled = true;
    // Initialize the camera controller
let mut camera_controller = CameraController::new(&config).await.unwrap();
println!("Initializing with video file: {}", file_path);
camera_controller.initialize().await.unwrap();
    // Create the star chart overlay
let gps_status = Arc::new(Mutex::new(GpsStatus::default()));
let mut star_chart = StarChart::new(
config.star_chart.clone(),
gps_status.clone()
).await.unwrap();
println!("Starting video playback...");
camera_controller.start_capture().await.unwrap();
    // Subscribe to frames
let mut frame_rx = camera_controller.subscribe_to_frames();
    // Create the display window
highgui::named_window("File Input Demo", highgui::WINDOW_NORMAL)?;
highgui::resize_window("File Input Demo", 1280, 720)?;
    // Frame processing loop
while let Ok(frame) = frame_rx.recv().await {
        // Clone the frame for display
let mut display_frame = frame.mat.clone();
        // Apply the star chart overlay
if let Err(e) = star_chart.apply(&mut display_frame, frame.timestamp).await {
println!("Error applying star chart: {}", e);
}
        // Draw the timestamp and frame counter on the frame
let timestamp_text = format!("Frame: {} - Time: {}",
frame.index,
frame.timestamp.with_timezone(&Local).format("%Y-%m-%d %H:%M:%S%.3f")
);
opencv::imgproc::put_text(
&mut display_frame,
&timestamp_text,
core::Point::new(10, 30),
opencv::imgproc::FONT_HERSHEY_SIMPLEX,
1.0,
core::Scalar::new(0.0, 255.0, 0.0, 0.0),
2,
opencv::imgproc::LINE_AA,
false,
)?;
        // Display the frame
highgui::imshow("File Input Demo", &display_frame)?;
        // Check for key presses
let key = highgui::wait_key(1)?;
if key == 'q' as i32 {
println!("Quitting...");
break;
}
}
    // Cleanup
println!("Shutting down...");
camera_controller.stop_capture().await.unwrap();
highgui::destroy_all_windows()?;
println!("Demo completed successfully");
Ok(())
}

BIN
demos/meteor.mov Normal file

Binary file not shown.

508
demos/star_chart_demo.rs Normal file

@ -0,0 +1,508 @@
use anyhow::Result;
use chrono::Utc;
use opencv::{core, imgcodecs, imgproc, prelude::*};
use std::fs;
use std::sync::{Arc, Mutex};
use std::time::{Duration, Instant};
// Import modules from the project
use meteor_detect::camera::CameraController;
use meteor_detect::config::Config;
use meteor_detect::display::{DisplayBackend, DisplayConfig, OpenCVDisplay};
use meteor_detect::gps::{CameraOrientation, GeoPosition, GpsStatus};
use meteor_detect::overlay::star_chart::{StarChart, StarChartOptions};
use meteor_detect::utils::memory_monitor::{estimate_mat_size, get_system_memory_usage};
/// Star Chart Demo Program
/// This demo shows how to use astrometry.net for star chart solving and overlay on video frames
#[tokio::main]
async fn main() -> Result<()> {
std::env::set_var("RUST_LOG", "debug");
env_logger::init();
println!("*** Star Chart Solving Demo ***");
println!("This demo shows how to use astrometry.net to solve star charts and overlay them on video");
println!("Press 'q' to exit");
println!("Press 's' to manually trigger star chart solving");
println!("Press '1' to toggle stars display");
println!("Press '2' to toggle constellations display");
println!("Press '3' to toggle NGC objects display");
println!("Press 'd' to toggle all star chart display");
println!("Press '+' to increase display scale, '-' to decrease");
println!("Press 'f' to toggle frame skipping");
println!("Press 'p' to show performance stats");
println!("Press 'm' to show detailed memory analysis");
println!();
let file_path = "meteor.mov";
// Check if the file exists
if !std::path::Path::new(file_path).exists() {
println!("Error: File '{}' does not exist", file_path);
return Ok(());
}
// Create configuration
let mut config = Config::default();
config.log_level = "debug".to_string(); // Change log level to debug for more detailed output
// Set file input mode
config.camera.file_input_mode = true;
config.camera.input_file_path = file_path.to_string();
config.camera.loop_video = true;
// Configure star chart solving parameters
    let index_files = vec![
        "/Users/grabbit/Project/astrometry/index-4100/index-4119.fits".to_string(),
        "/Users/grabbit/Project/astrometry/index-4100/index-4117.fits".to_string(),
        "/Users/grabbit/Project/astrometry/index-4100/index-4118.fits".to_string(),
    ];
    config.star_chart = StarChartOptions {
        enabled: true, // Enable star chart for testing
        solve_field_path: "/opt/homebrew/bin/solve-field".to_string(),
        // index_path: "/Users/grabbit/Project/astrometry/index-4100".to_string(),
        index_path: "".to_string(),
        index_file: Some(index_files),
        update_frequency: 5.0, // Solve every 5 seconds (solving can be slow on large images)
star_color: (0, 255, 0, 255), // Green
constellation_color: (0, 180, 0, 180), // Semi-transparent green
ngc_color: (255, 100, 0, 200), // Orange for NGC objects
star_size: 3,
line_thickness: 1,
ngc_size: 5,
working_dir: "/tmp/astrometry".to_string(),
show_stars: true,
show_star_names: true,
show_ngc_names: true,
show_constellations: true,
show_ngc_objects: true,
min_ngc_size: 5.0,
min_ngc_pixel_percent: 0.005, // 0.5% of image width - good balance for demo
index_scale_range: (70.0, 90.0), // Wider range in arcsec per pixel
max_solve_time: 60, // Wait max 60 seconds for large images
hint_timeout_minutes: 2,
camera_calibration: None,
};
// Ensure working directory exists
let working_dir = std::path::Path::new(&config.star_chart.working_dir);
if !working_dir.exists() {
fs::create_dir_all(working_dir)?;
}
// Initialize camera controller
println!("Initializing video file: {}", file_path);
let mut camera_controller = CameraController::new(&config).await.unwrap();
camera_controller.initialize().await.unwrap();
// Create simulated GPS data - set a typical observation location (e.g., Beijing Observatory)
let gps_status = Arc::new(Mutex::new(GpsStatus {
position: GeoPosition {
latitude: 40.0, // Beijing latitude ~40 degrees
longitude: 116.4, // Beijing longitude ~116.4 degrees
altitude: 50.0, // Altitude ~50 meters
},
satellites: 8,
timestamp: Utc::now(),
sync_status: "FullSync".to_string(),
time_accuracy_ms: 1.0,
camera_orientation: CameraOrientation {
azimuth: 0.0, // Facing north
elevation: 90.0, // Vertical upward
},
}));
// Create star chart overlay
println!("Initializing star chart solving component...");
let mut star_chart = StarChart::new(config.star_chart.clone(), gps_status.clone())
.await
.unwrap();
// Start camera capture
println!("Starting video playback...");
camera_controller.start_capture().await.unwrap();
// Get frame subscription
let mut frame_rx = camera_controller.subscribe_to_frames();
// Create optimized display system
let display_config = DisplayConfig {
display_scale: 0.5, // Start with 50% scale for better performance
frame_skip: 2, // Skip every other frame
async_rendering: false,
max_render_time_ms: 33, // ~30 FPS max
};
let mut display = OpenCVDisplay::new(display_config);
let window_name = "Star Chart Solving Demo";
display.create_window(window_name)?;
display.resize_window(window_name, 1280, 720)?;
// Status variables
let mut show_star_chart = true;
let mut show_stars = true;
let mut show_constellations = true;
let mut show_ngc_objects = true;
let mut frame_count = 0;
let mut last_manual_solve_time = Utc::now() - chrono::Duration::seconds(60); // Initially set to past time
let mut last_memory_check = Instant::now();
// Frame processing loop
println!("Entering frame processing loop");
while let Ok(frame) = frame_rx.recv().await {
// println!("Attempting to receive frame");
frame_count += 1;
// println!("Frame received: {}", frame_count);
// Use frame directly for display to avoid unnecessary clone
let mut display_frame = frame.mat;
// Apply star chart overlay
if show_star_chart {
// Update star chart options based on current toggles
let mut options = config.star_chart.clone();
options.show_stars = show_stars;
options.show_constellations = show_constellations;
options.show_ngc_objects = show_ngc_objects;
star_chart.set_options(options);
// println!("Applying star chart overlay for frame {}", frame_count);
if let Err(e) = star_chart.apply(&mut display_frame, frame.timestamp).await {
println!("Star chart application error: {}", e);
}
// println!("Finished applying star chart overlay for frame {}", frame_count);
}
// Add information overlay
add_info_overlay(&mut display_frame, frame_count, frame.timestamp, show_star_chart, show_stars, show_constellations, show_ngc_objects, &display)?;
// Display frame using optimized display system
if let Err(e) = display.show_frame(window_name, &display_frame) {
println!("Display error: {}", e);
}
// Periodic memory check (every 5 seconds)
if last_memory_check.elapsed() > Duration::from_secs(5) {
let stats = display.get_stats();
let frame_size = estimate_mat_size(&display_frame);
println!("=== Main Loop Memory Check ===");
println!("Frame count: {}", frame_count);
println!("Current display_frame size: {:.2} MB", frame_size as f64 / 1_000_000.0);
println!("Display_frame dimensions: {}x{}", display_frame.cols(), display_frame.rows());
println!("Display stats - Frames: {}/{} (displayed/total), Avg render: {:.2}ms",
stats.frames_displayed,
stats.frames_displayed + stats.frames_dropped,
stats.avg_render_time_ms);
// Check if display_frame is growing
if frame_size > 20_000_000 { // > 20MB
println!("⚠️ WARNING: Display frame unusually large: {:.2}MB", frame_size as f64 / 1_000_000.0);
}
// System memory check
if let Some((used, free)) = get_system_memory_usage() {
let total = used + free;
let used_gb = used as f64 / 1_000_000_000.0;
let used_percent = (used as f64 / total as f64) * 100.0;
println!("System memory: {:.2}GB used, {:.2}GB free ({:.1}% used)",
used_gb,
free as f64 / 1_000_000_000.0,
used_percent);
if used > 8_000_000_000 { // > 8GB
println!("⚠️ WARNING: High memory usage detected!");
}
                // Track memory growth between checks; `static mut` is
                // tolerable here only because this loop is single-threaded
                static mut LAST_MEMORY_USAGE: u64 = 0;
unsafe {
if LAST_MEMORY_USAGE > 0 {
let growth = (used as i64) - (LAST_MEMORY_USAGE as i64);
if growth > 100_000_000 { // > 100MB growth
println!("🚨 MEMORY GROWTH DETECTED: +{:.2}MB since last check",
growth as f64 / 1_000_000.0);
}
}
LAST_MEMORY_USAGE = used;
}
}
last_memory_check = Instant::now();
}
// Check keys (non-blocking)
let key = display.poll_key(1)?;
match key as u8 as char {
'q' => {
println!("Exiting...");
break;
}
's' => {
// Manually trigger star chart solving
let now = Utc::now();
let elapsed = now.signed_duration_since(last_manual_solve_time);
if elapsed.num_seconds() >= 10 {
// At least 10 seconds interval
println!("Manually triggering star chart solving...");
// We don't directly call star chart solving here, as it's handled in the background_thread
// But we can "trick" the system into doing a new solve immediately by updating the last solve time
last_manual_solve_time = now;
// You can also save the current frame for analysis
let save_path = format!("/tmp/astrometry/manual_solve_{}.jpg", now.timestamp());
imgcodecs::imwrite(&save_path, &display_frame, &core::Vector::new())?;
println!("Current frame saved to: {}", save_path);
} else {
println!("Please wait at least 10 seconds before manually triggering solving");
}
}
'1' => {
// Toggle stars display
show_stars = !show_stars;
println!(
"Stars display: {}",
if show_stars { "ON" } else { "OFF" }
);
}
'2' => {
// Toggle constellations display
show_constellations = !show_constellations;
println!(
"Constellations display: {}",
if show_constellations { "ON" } else { "OFF" }
);
}
'3' => {
// Toggle NGC objects display
show_ngc_objects = !show_ngc_objects;
println!(
"NGC objects display: {}",
if show_ngc_objects { "ON" } else { "OFF" }
);
}
'd' => {
// Toggle all star chart display
show_star_chart = !show_star_chart;
println!(
"Star chart display: {}",
if show_star_chart { "ON" } else { "OFF" }
);
}
'+' | '=' => {
// Increase display scale
let mut config = *display.config(); // Copy instead of clone
config.display_scale = (config.display_scale + 0.1).min(1.0);
display.set_config(config);
println!("Display scale: {:.1}x", config.display_scale);
}
'-' => {
// Decrease display scale
let mut config = *display.config(); // Copy instead of clone
config.display_scale = (config.display_scale - 0.1).max(0.1);
display.set_config(config);
println!("Display scale: {:.1}x", config.display_scale);
}
'f' => {
// Toggle frame skipping
let mut config = *display.config(); // Copy instead of clone
config.frame_skip = if config.frame_skip == 1 { 2 } else { 1 };
display.set_config(config);
println!("Frame skip: every {} frame(s)", config.frame_skip);
}
'p' => {
// Show performance stats
let stats = display.get_stats();
println!("=== Performance Stats ===");
println!("Backend: {}", stats.backend_name);
println!("Frames displayed: {}", stats.frames_displayed);
println!("Frames dropped: {}", stats.frames_dropped);
println!("Avg render time: {:.2}ms", stats.avg_render_time_ms);
let config = display.config();
println!("Display config: scale={:.1}x, skip={}",
config.display_scale, config.frame_skip);
}
'm' => {
// Show detailed memory analysis
println!("=== Detailed Memory Analysis ===");
let frame_size = estimate_mat_size(&display_frame);
println!("Current display frame: {:.2} MB", frame_size as f64 / 1_000_000.0);
if let Some((used, free)) = get_system_memory_usage() {
let total = used + free;
println!("System Memory:");
println!(" Total: {:.2} GB", total as f64 / 1_000_000_000.0);
println!(" Used: {:.2} GB ({:.1}%)",
used as f64 / 1_000_000_000.0,
(used as f64 / total as f64) * 100.0);
println!(" Free: {:.2} GB ({:.1}%)",
free as f64 / 1_000_000_000.0,
(free as f64 / total as f64) * 100.0);
}
                // Note: Rust has no garbage collector to trigger; memory is
                // reclaimed deterministically when values are dropped
}
_ => {}
}
// println!("Finished processing frame {}", frame_count);
}
println!("Exiting frame processing loop");
// Cleanup
println!("Shutting down...");
camera_controller.stop_capture().await.unwrap();
star_chart.shutdown().await.unwrap();
display.destroy_window(window_name)?;
// Show final performance stats
let final_stats = display.get_stats();
println!("=== Final Performance Stats ===");
println!("Total frames displayed: {}", final_stats.frames_displayed);
println!("Total frames dropped: {}", final_stats.frames_dropped);
println!("Final avg render time: {:.2}ms", final_stats.avg_render_time_ms);
println!("Demo successfully completed");
Ok(())
}
/// Add information overlay to video frame
fn add_info_overlay(
frame: &mut core::Mat,
frame_count: u32,
timestamp: chrono::DateTime<chrono::Utc>,
star_chart_enabled: bool,
show_stars: bool,
show_constellations: bool,
show_ngc_objects: bool,
display: &OpenCVDisplay,
) -> Result<()> {
// Create overlay area parameters
let overlay_height = 120; // Increased height for more information
let width = frame.cols();
// Ensure height doesn't exceed frame height
let overlay_height = std::cmp::min(overlay_height, frame.rows());
// Create dark semi-transparent background for top area
let bg_rect = core::Rect::new(0, 0, width, overlay_height);
let bg_color = core::Scalar::new(0.0, 0.0, 0.0, 0.0);
    // Save a copy of the original area for the (currently disabled)
    // semi-transparent blend at the end of this function
    let original_roi = frame.roi(bg_rect)?.try_clone()?;
// Draw semi-transparent rectangle on original frame
imgproc::rectangle(
frame,
bg_rect,
bg_color,
-1, // Fill
imgproc::LINE_8,
0
)?;
// Add text directly on frame
let white = core::Scalar::new(255.0, 255.0, 255.0, 0.0);
let green = core::Scalar::new(0.0, 255.0, 0.0, 0.0);
// Frame count and timestamp
let time_text = format!(
"Frame: {} | Time: {}",
frame_count,
timestamp.format("%Y-%m-%d %H:%M:%S%.3f")
);
imgproc::put_text(
frame,
&time_text,
core::Point::new(10, 20),
imgproc::FONT_HERSHEY_SIMPLEX,
0.5,
white,
1,
imgproc::LINE_AA,
false,
)?;
// Star chart status
let star_chart_text = format!(
"Star Chart: {} | Stars: {} | Constellations: {} | NGC: {}",
if star_chart_enabled { "ON" } else { "OFF" },
if show_stars { "ON" } else { "OFF" },
if show_constellations { "ON" } else { "OFF" },
if show_ngc_objects { "ON" } else { "OFF" }
);
imgproc::put_text(
frame,
&star_chart_text,
core::Point::new(10, 40),
imgproc::FONT_HERSHEY_SIMPLEX,
0.5,
green,
1,
imgproc::LINE_AA,
false,
)?;
// Controls information
let controls_text = "Controls: 'S'=Solve | 'D'=Toggle All | '1'=Stars | '2'=Constellations | '3'=NGC | 'Q'=Exit";
imgproc::put_text(
frame,
controls_text,
core::Point::new(10, 60),
imgproc::FONT_HERSHEY_SIMPLEX,
0.5,
white,
1,
imgproc::LINE_AA,
false,
)?;
// File info and description
let help_text = "This demo shows real-time star chart solving and overlay functionality";
imgproc::put_text(
frame,
help_text,
core::Point::new(10, 80),
imgproc::FONT_HERSHEY_SIMPLEX,
0.5,
white,
1,
imgproc::LINE_AA,
false,
)?;
// Performance and optimization info
let stats = display.get_stats();
let config = display.config();
let perf_text = format!(
"Performance: {:.1}ms/frame | Scale: {:.1}x | Skip: {} | Frames: {}/{}",
stats.avg_render_time_ms,
config.display_scale,
config.frame_skip,
stats.frames_displayed,
stats.frames_displayed + stats.frames_dropped
);
imgproc::put_text(
frame,
&perf_text,
core::Point::new(10, 100),
imgproc::FONT_HERSHEY_SIMPLEX,
0.4,
green,
1,
imgproc::LINE_AA,
false,
)?;
    // Blending `original_roi` back over the overlay area (for a
    // semi-transparent header) is currently disabled; the background
    // rectangle above is drawn fully opaque instead.
    // let mut roi_dest = frame.roi_mut(bg_rect)?;
    // core::add_weighted(&original_roi, 0.3, &roi_dest.clone_pointee(), 0.7, 0.0, &mut roi_dest, -1)?;
    let _ = original_roi;
Ok(())
}

445
demos/watermark_demo.rs Normal file

@ -0,0 +1,445 @@
use std::io;
use opencv::{highgui, prelude::*};
// Simplified watermark overlay module for demo
mod overlay {
use anyhow::Result;
use chrono::{DateTime, Utc};
use opencv::{core, imgproc, prelude::*};
use serde::{Deserialize, Serialize};
use freetype::{Library, Face};
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum WatermarkPosition {
TopLeft,
TopRight,
BottomLeft,
BottomRight,
Custom(u32, u32),
}
impl Default for WatermarkPosition {
fn default() -> Self {
Self::BottomLeft
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WatermarkOptions {
pub enabled: bool,
pub position: WatermarkPosition,
pub font_scale: f64,
pub thickness: i32,
pub color: (u8, u8, u8, u8),
pub background: bool,
pub background_color: (u8, u8, u8, u8),
pub padding: i32,
}
impl Default for WatermarkOptions {
fn default() -> Self {
Self {
enabled: true,
position: WatermarkPosition::BottomLeft,
font_scale: 0.6,
thickness: 1,
color: (255, 255, 255, 255), // White
background: true,
background_color: (0, 0, 0, 128), // Semi-transparent black
padding: 8,
}
}
}
pub struct WatermarkDemo {
options: WatermarkOptions,
gps_position: (f64, f64, f64), // lat, lon, alt
temperature: f32,
humidity: f32,
freetype_lib: Library,
        font_face: Face,
}
impl WatermarkDemo {
pub fn new(options: WatermarkOptions) -> Result<Self> {
let freetype_lib = Library::init()?;
// Dynamically set font_path based on OS using conditional compilation
let font_path = if cfg!(target_os = "windows") {
"C:\\Windows\\Fonts\\arial.ttf" // Common Windows font
} else if cfg!(target_os = "linux") {
"/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf" // Common Linux font
} else if cfg!(target_os = "macos") {
"/System/Library/Fonts/Supplemental/Arial.ttf" // Common macOS font
} else {
// Fallback or error for other OS
eprintln!("Warning: Could not determine a suitable font path for the current OS. Using a default that might not exist.");
"/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf" // Default fallback
};
let font_face = freetype_lib.new_face(font_path, 0)?;
Ok(Self {
options,
gps_position: (34.0522, -118.2437, 85.0), // Default to Los Angeles
temperature: 25.0,
humidity: 60.0,
freetype_lib,
font_face,
})
}
pub fn set_gps(&mut self, lat: f64, lon: f64, alt: f64) {
self.gps_position = (lat, lon, alt);
}
pub fn set_weather(&mut self, temp: f32, humidity: f32) {
self.temperature = temp;
self.humidity = humidity;
}
pub fn apply(&mut self, frame: &mut core::Mat, timestamp: DateTime<Utc>) -> Result<()> {
if !self.options.enabled {
return Ok(());
}
// Build text lines
let mut lines = Vec::new();
// Timestamp
lines.push(timestamp.format("%Y-%m-%d %H:%M:%S%.3f").to_string());
// GPS coordinates
let (lat, lon, alt) = self.gps_position;
lines.push(format!("Lat: {:.6}° Lon: {:.6}° Alt: {:.1}m", lat, lon, alt));
// Environment data
lines.push(format!(
"Temp: {:.1}°C Humidity: {:.1}%",
self.temperature, self.humidity
));
// Skip if no content
if lines.is_empty() {
return Ok(());
}
// Get frame dimensions
let width = frame.cols();
let height = frame.rows();
// Set font size based on options.font_scale (adjust as needed)
let font_size = (self.options.font_scale * 30.0) as isize; // Example scaling
self.font_face.set_pixel_sizes(0, font_size as u32)?;
let padding = self.options.padding;
let line_spacing = 4; // Space between lines
// Calculate text size and total block size using FreeType
let mut line_heights = Vec::with_capacity(lines.len());
let mut max_width = 0;
let mut total_height = 0;
for line in &lines {
let mut line_width = 0;
let mut line_height = 0;
for char in line.chars() {
self.font_face.load_char(char as usize, freetype::face::LoadFlag::RENDER)?;
let glyph = self.font_face.glyph();
let metrics = glyph.metrics();
line_width += (metrics.horiAdvance / 64) as i32;
line_height = line_height.max((metrics.height / 64) as i32);
}
line_heights.push(line_height);
max_width = max_width.max(line_width);
total_height += line_height;
}
let text_block_height = total_height + line_spacing * (lines.len() as i32 - 1) + padding * 2;
let text_block_width = max_width + padding * 2;
// Calculate watermark position
let (x, y) = match self.options.position {
WatermarkPosition::TopLeft => (padding, padding),
WatermarkPosition::TopRight => (width - text_block_width - padding, padding),
WatermarkPosition::BottomLeft => (padding, height - text_block_height - padding),
WatermarkPosition::BottomRight => (
width - text_block_width - padding,
height - text_block_height - padding
),
WatermarkPosition::Custom(x, y) => (x as i32, y as i32),
};
// Draw background rectangle if enabled
if self.options.background {
let bg_color = core::Scalar::new(
self.options.background_color.0 as f64,
self.options.background_color.1 as f64,
self.options.background_color.2 as f64,
self.options.background_color.3 as f64,
);
let rect = core::Rect::new(
x,
y,
text_block_width,
text_block_height
);
imgproc::rectangle(
frame,
rect,
bg_color,
-1, // Fill
imgproc::LINE_8,
0,
)?;
}
// Draw text lines using FreeType
let text_color = core::Scalar::new(
self.options.color.0 as f64,
self.options.color.1 as f64,
self.options.color.2 as f64,
self.options.color.3 as f64,
);
let mut current_y = y + padding;
for (i, line) in lines.iter().enumerate() {
let mut current_x = x + padding;
for char in line.chars() {
self.font_face.load_char(char as usize, freetype::face::LoadFlag::RENDER)?;
let glyph = self.font_face.glyph();
let bitmap = glyph.bitmap();
let metrics = glyph.metrics();
let bitmap_width = bitmap.width() as usize;
let bitmap_rows = bitmap.rows() as usize;
let bitmap_buffer = bitmap.buffer();
// Calculate text position for this character
let char_x = current_x + (metrics.horiBearingX / 64) as i32;
let char_y = current_y + (line_heights[i] as f64 * 0.8) as i32 - (metrics.horiBearingY / 64) as i32; // Adjust baseline
// Draw the glyph bitmap onto the frame
for row in 0..bitmap_rows {
for col in 0..bitmap_width {
let bitmap_pixel = bitmap_buffer[row * bitmap.pitch() as usize + col];
if bitmap_pixel > 0 {
let frame_x = char_x + col as i32;
let frame_y = char_y + row as i32;
// Ensure pixel is within frame bounds
if frame_x >= 0 && frame_x < width && frame_y >= 0 && frame_y < height {
let mut pixel = frame.at_2d_mut::<core::Vec3b>(frame_y, frame_x)?;
pixel[0] = text_color[0] as u8;
pixel[1] = text_color[1] as u8;
pixel[2] = text_color[2] as u8;
}
}
}
}
current_x += (metrics.horiAdvance / 64) as i32;
}
current_y += line_heights[i] + line_spacing;
}
Ok(())
}
}
}
// Simplified camera module for demo
mod camera {
use opencv::{core, prelude::*, videoio};
use chrono::{DateTime, Utc};
pub struct Frame {
pub image: core::Mat,
pub timestamp: DateTime<Utc>,
pub index: u64,
}
impl Frame {
pub fn new(image: core::Mat, timestamp: DateTime<Utc>, index: u64) -> Self {
Self { image, timestamp, index }
}
}
pub struct CameraDemo {
capture: videoio::VideoCapture,
frame_count: u64,
}
impl CameraDemo {
pub fn new(device_id: i32) -> Result<Self, Box<dyn std::error::Error>> {
let capture = videoio::VideoCapture::new(device_id, videoio::CAP_ANY)?;
if !capture.is_opened()? {
return Err(format!("Failed to open camera device {}", device_id).into());
}
Ok(Self {
capture,
frame_count: 0,
})
}
pub fn from_file(path: &str) -> Result<Self, Box<dyn std::error::Error>> {
let capture = videoio::VideoCapture::from_file(path, videoio::CAP_ANY)?;
if !capture.is_opened()? {
return Err(format!("Failed to open video file {}", path).into());
}
Ok(Self {
capture,
frame_count: 0,
})
}
pub fn capture_frame(&mut self) -> Result<Frame, Box<dyn std::error::Error>> {
let mut frame = core::Mat::default();
if self.capture.read(&mut frame)? {
if frame.empty() {
return Err("Captured frame is empty".into());
}
let timestamp = Utc::now();
let index = self.frame_count;
self.frame_count += 1;
Ok(Frame::new(frame, timestamp, index))
} else {
Err("Failed to capture frame".into())
}
}
}
}
fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("*** Watermark Overlay Demo ***");
println!("This demo adds a watermark overlay to frames from a camera or video file");
println!("Press 'q' to quit");
println!("Press 'p' to change watermark position");
println!("Press 'c' to change GPS coordinates");
println!("Press 't' to change temperature and humidity");
println!();
println!("Choose input source:");
println!("1. Camera device");
println!("2. Video file");
let mut choice = String::new();
io::stdin().read_line(&mut choice)?;
let mut camera_demo = match choice.trim() {
"1" => {
println!("Enter camera device ID (default is 0):");
let mut device_id = String::new();
io::stdin().read_line(&mut device_id)?;
let device_id = device_id.trim().parse::<i32>().unwrap_or(0);
println!("Opening camera device {}", device_id);
camera::CameraDemo::new(device_id)?
},
"2" => {
println!("Enter video file path:");
let mut file_path = String::new();
io::stdin().read_line(&mut file_path)?;
let file_path = file_path.trim();
println!("Opening video file: {}", file_path);
camera::CameraDemo::from_file(file_path)?
},
_ => {
println!("Invalid choice, using default camera (device 0)");
camera::CameraDemo::new(0)?
}
};
// Create watermark demo
let mut watermark_options = overlay::WatermarkOptions::default();
let mut watermark_demo = overlay::WatermarkDemo::new(watermark_options.clone())?;
highgui::named_window("Watermark Overlay Demo", highgui::WINDOW_NORMAL)?;
loop {
match camera_demo.capture_frame() {
Ok(frame) => {
// Apply watermark to the frame
let mut display_frame = frame.image.clone();
if let Err(e) = watermark_demo.apply(&mut display_frame, frame.timestamp) {
println!("Error applying watermark: {}", e);
}
// Display the frame
highgui::imshow("Watermark Overlay Demo", &display_frame)?;
// Handle key presses
let key = highgui::wait_key(30)?;
match key as u8 as char {
'q' => break, // Quit
'p' => {
// Cycle through positions
watermark_options.position = match watermark_options.position {
overlay::WatermarkPosition::TopLeft => overlay::WatermarkPosition::TopRight,
overlay::WatermarkPosition::TopRight => overlay::WatermarkPosition::BottomRight,
overlay::WatermarkPosition::BottomRight => overlay::WatermarkPosition::BottomLeft,
overlay::WatermarkPosition::BottomLeft => overlay::WatermarkPosition::TopLeft,
_ => overlay::WatermarkPosition::TopLeft,
};
watermark_demo = overlay::WatermarkDemo::new(watermark_options.clone())?;
println!("Changed watermark position: {:?}", watermark_options.position);
},
'c' => {
// Change to some predefined GPS locations
static LOCATIONS: [(f64, f64, f64, &str); 4] = [
(34.0522, -118.2437, 85.0, "Los Angeles"),
(40.7128, -74.0060, 10.0, "New York"),
(51.5074, -0.1278, 20.0, "London"),
(35.6762, 139.6503, 40.0, "Tokyo"),
];
static mut LOCATION_INDEX: usize = 0;
unsafe {
LOCATION_INDEX = (LOCATION_INDEX + 1) % LOCATIONS.len();
let (lat, lon, alt, name) = LOCATIONS[LOCATION_INDEX];
watermark_demo.set_gps(lat, lon, alt);
println!("Changed location to {}: Lat={}, Lon={}, Alt={}m", name, lat, lon, alt);
}
},
't' => {
// Cycle through different weather conditions
static CONDITIONS: [(f32, f32, &str); 4] = [
(25.0, 60.0, "Warm and Humid"),
(32.0, 80.0, "Hot and Very Humid"),
(15.0, 40.0, "Cool and Dry"),
(5.0, 30.0, "Cold and Dry"),
];
static mut CONDITION_INDEX: usize = 0;
unsafe {
CONDITION_INDEX = (CONDITION_INDEX + 1) % CONDITIONS.len();
let (temp, humidity, desc) = CONDITIONS[CONDITION_INDEX];
watermark_demo.set_weather(temp, humidity);
println!("Changed weather to {}: Temp={} deg C, Humidity={}%", desc, temp, humidity);
}
},
_ => {}
}
},
Err(e) => {
println!("Error capturing frame: {}", e);
break;
}
}
}
highgui::destroy_all_windows()?;
Ok(())
}

114
docs/file_input.md Normal file

@ -0,0 +1,114 @@
# File Input Mode Guide
This document describes how to use the meteor monitoring system's file input mode, which lets you run the system against pre-recorded video files instead of a live camera for testing and development.
## Configuring File Input Mode
### Method 1: Using a config file
1. Create or modify a config file (for example `config.toml`) and enable file input mode:
```toml
[camera]
# Enable file input mode
file_input_mode = true
# Path to the video file
input_file_path = "/path/to/your/starsky_video.mp4"
# Whether to loop the video
loop_video = true
```
2. Run the system with this config file:
```bash
./meteor_detect --config /path/to/config.toml
```
### Method 2: Using the demo program
The project includes a dedicated file input demo program that exercises the star chart overlay directly:
```bash
cargo run --bin file_input_demo
```
The program prompts for a video file path and then displays the video with the star chart overlay applied.
## Recommended Video Sources
1. **Public-domain night-sky time-lapses**
- NASA image and video library
- ESA image and video library
- Public material from national observatories
2. **Creative Commons night-sky videos**
- Pexels (https://www.pexels.com/search/videos/night%20sky/)
- Pixabay (https://pixabay.com/videos/search/night%20sky/)
- Videvo (https://www.videvo.net/stock-video-footage/night-sky/)
3. **Astronomer resources**
- Many astronomers share time-lapse footage for educational purposes
- Telescope footage published by observatories
## Video File Requirements
For best results, use video files that meet the following criteria:
1. **Resolution**: 720p or higher recommended, ideally 1080p
2. **Format**: common video formats such as MP4, AVI and MOV (any format OpenCV supports)
3. **Frame rate**: ideally around 30 fps
4. **Exposure**: exposure long enough to capture stars clearly
5. **Content**: a relatively stable view of the sky, ideally a long time-lapse
6. **Light pollution**: prefer footage shot under low light pollution
## Limitations of File Input Mode
Note the following limitations when using file input mode:
1. **GPS data**: the system uses the fixed GPS position from the config file instead of a live position
2. **Timestamps**: timestamps are based on the current system time, not the original capture time
3. **Video looping**: with looping enabled, the video automatically restarts when it ends
## Example: Testing Plate Solving
The following steps test the plate-solving feature using file input mode:
1. Prepare a video file containing a clear starry sky
2. Modify the config to enable file input mode and the star chart overlay:
```toml
[camera]
file_input_mode = true
input_file_path = "/path/to/starsky_video.mp4"
loop_video = true
[star_chart]
enabled = true
solve_field_path = "/usr/local/bin/solve-field"
index_path = "/usr/local/share/astrometry"
update_frequency = 30.0
```
3. Run the system and observe the plate solving and overlay process
4. Adjust the star chart settings as needed for the best results
## Troubleshooting
If you run into problems in file input mode, check the following:
1. **Video file missing or inaccessible**: confirm the path is correct and that you have read permission
2. **Unsupported video format**: try converting the video to MP4
3. **Excessive resource usage**: for high-resolution videos, consider reducing the resolution or frame rate
4. **Plate solving fails**: make sure the video contains enough stars and that the astrometry.net tools are installed correctly
## Developer Notes
When developing with file input mode:
1. Use specific video clips to reproduce and test problem scenarios
2. Prepare several different kinds of night-sky videos to test performance under varying conditions
3. Use calibration videos with known star positions when testing specific features
---
File input mode lets you develop and test every part of the system conveniently, without depending on live weather conditions or specialized camera equipment.


@ -138,6 +138,9 @@ fn create_demo_config() -> Config {
exposure: ExposureMode::Auto,
gain: 128,
focus_locked: true,
file_input_mode: false,
input_file_path: String::new(),
loop_video: true,
};
Config {

src/camera/controller.rs

@ -7,7 +7,7 @@ use std::time::Duration;
use tokio::sync::broadcast;
use tokio::time;
- use crate::camera::frame_buffer::{Frame, FrameBuffer, SharedFrameBuffer};
+ use crate::camera::frame_buffer::{Frame, FrameBuffer, SharedFrameBuffer, LZ4FrameBuffer, SharedLZ4FrameBuffer};
use crate::camera::opencv::{OpenCVCamera, OpenCVCaptureStream};
use crate::camera::{CameraSettings, ExposureMode, MeteorEvent, Resolution};
@ -21,6 +21,8 @@ pub struct CameraController {
stream: Option<OpenCVCaptureStream>,
/// Circular buffer for storing recent frames
frame_buffer: SharedFrameBuffer,
// /// LZ4 compressed buffer for memory efficiency
// lz4_buffer: Option<SharedLZ4FrameBuffer>,
/// Frame counter
frame_count: u64,
/// Whether the camera is currently running
@ -37,12 +39,46 @@ impl CameraController {
// Extract camera settings from config (placeholder for now)
let settings = config.clone().camera;
// Create frame buffer with capacity for 10 minutes of video at settings.fps
let buffer_capacity = (10 * 60 * settings.fps) as usize;
// 🚨 MAJOR MEMORY ISSUE: 10 minutes of video buffer!
// For 30fps video this is 18,000 frames × ~16.84MB ≈ 300GB of memory!
// URGENT: Reduce buffer size for demos and testing
let buffer_capacity = if settings.file_input_mode {
// For file input mode (demos), use much smaller buffer
150 // 5 seconds' worth at 30fps ≈ 2.5GB instead of ~300GB
} else {
// For live camera, keep some buffer but much smaller
(5 * settings.fps) as usize // 5 seconds of frames instead of 10 minutes
};
// warn!("🔍 FRAME BUFFER: Creating buffer with capacity {} frames", buffer_capacity);
// warn!("🔍 ESTIMATED MEMORY: ~{:.2}GB for frame buffer",
// buffer_capacity as f64 * 16.84 / 1000.0);
let frame_buffer = Arc::new(FrameBuffer::new(buffer_capacity));
// Create broadcast channel for frames
let (frame_tx, _) = broadcast::channel(30);
// // Create LZ4 buffer for ultra-fast compression with better memory efficiency
// let lz4_buffer = if settings.file_input_mode {
// // For demos, use LZ4 buffer with larger capacity but much lower memory usage
// let lz4_capacity = 300; // Back to 300 frames, but LZ4 compressed
// let recent_frames = 10; // Keep 10 most recent frames uncompressed for ultra-fast access
//
// warn!("⚡ CREATING LZ4 BUFFER: {} total capacity, {} recent uncompressed",
// lz4_capacity, recent_frames);
// warn!("⚡ ESTIMATED MEMORY: ~{:.2}MB (vs {:.2}MB uncompressed) - ULTRA-FAST!",
// (recent_frames as f64 * 16.84) + (lz4_capacity - recent_frames) as f64 * 4.0, // Assume 4MB per LZ4 compressed frame
// lz4_capacity as f64 * 16.84);
//
// Some(Arc::new(LZ4FrameBuffer::new(lz4_capacity, recent_frames)))
// } else {
// None
// };
// 🚨 BROADCAST CHANNEL: Also consuming massive memory
let channel_capacity = if settings.file_input_mode { 30 } else { 10 }; // 30 frames for file-input demos, 10 for live capture
warn!("🔍 BROADCAST CHANNEL: Creating channel with capacity {} frames (~{:.2}MB)",
channel_capacity, channel_capacity as f64 * 16.84);
let (frame_tx, _) = broadcast::channel(channel_capacity);
// Create events directory if it doesn't exist
let events_dir = PathBuf::from("events");
@ -53,6 +89,7 @@ impl CameraController {
camera: None,
stream: None,
frame_buffer,
// lz4_buffer,
frame_count: 0,
is_running: false,
frame_tx,
@ -62,35 +99,59 @@ impl CameraController {
/// Initialize the camera with current settings
pub async fn initialize(&mut self) -> Result<()> {
// Open the camera
let mut camera =
OpenCVCamera::open(&self.settings.device).context("Failed to open camera")?;
// Check if we're in file input mode
if self.settings.file_input_mode {
// Make sure we have a file path
if self.settings.input_file_path.is_empty() {
return Err(anyhow::anyhow!("File input mode enabled but no input file path provided"));
}
// Configure camera parameters
camera
.set_format(self.settings.resolution)
.context("Failed to set camera format")?;
info!("Initializing in file input mode: {}", self.settings.input_file_path);
camera
.set_fps(self.settings.fps)
.context("Failed to set camera FPS")?;
// Open the video file
let mut camera = OpenCVCamera::open_file(&self.settings.input_file_path)
.context("Failed to open video file")?;
camera
.set_exposure(self.settings.exposure)
.context("Failed to set camera exposure")?;
// Set FPS if specified
if self.settings.fps > 0 {
camera.set_fps(self.settings.fps)
.context("Failed to set video FPS")?;
}
camera
.set_gain(self.settings.gain)
.context("Failed to set camera gain")?;
self.camera = Some(camera);
info!("Video file initialized successfully");
} else {
// Normal camera mode
// Open the camera
let mut camera =
OpenCVCamera::open(&self.settings.device).context("Failed to open camera")?;
if self.settings.focus_locked {
// Configure camera parameters
camera
.lock_focus_at_infinity()
.context("Failed to lock focus at infinity")?;
}
.set_format(self.settings.resolution)
.context("Failed to set camera format")?;
self.camera = Some(camera);
info!("Camera initialized successfully");
camera
.set_fps(self.settings.fps)
.context("Failed to set camera FPS")?;
camera
.set_exposure(self.settings.exposure)
.context("Failed to set camera exposure")?;
camera
.set_gain(self.settings.gain)
.context("Failed to set camera gain")?;
if self.settings.focus_locked {
camera
.lock_focus_at_infinity()
.context("Failed to lock focus at infinity")?;
}
self.camera = Some(camera);
info!("Camera initialized successfully");
}
Ok(())
}
@ -108,15 +169,22 @@ impl CameraController {
.ok_or_else(|| anyhow::anyhow!("Camera not initialized"))?;
// Start the camera streaming
let stream = camera
.start_streaming()
.context("Failed to start camera streaming")?;
let stream = if self.settings.file_input_mode {
camera
.start_streaming_with_loop(self.settings.loop_video)
.context("Failed to start video file streaming")?
} else {
camera
.start_streaming()
.context("Failed to start camera streaming")?
};
self.stream = Some(stream);
self.is_running = true;
// Clone necessary values for the capture task
let frame_buffer = self.frame_buffer.clone();
// let lz4_buffer = self.lz4_buffer.clone();
let frame_tx = self.frame_tx.clone();
let fps = self.settings.fps;
let mut stream = self
@ -152,9 +220,18 @@ impl CameraController {
if let Err(e) = frame_buffer.push(frame.clone()) {
error!("Failed to add frame to buffer: {}", e);
}
//
// // Also add to LZ4 buffer if available (for demos) - ultra-fast compression!
// if let Some(ref lz4_buf) = lz4_buffer {
// if let Err(e) = lz4_buf.push(frame.clone()) {
// error!("Failed to add frame to LZ4 buffer: {}", e);
// }
// }
// Broadcast frame to listeners
let _ = frame_tx.send(frame);
if let Err(e) = frame_tx.send(frame) {
warn!("Failed to broadcast frame {}: {} (channel may be full or have no subscribers)", frame_count, e);
}
}
Err(e) => {
error!("Failed to capture frame: {}", e);
@ -166,6 +243,7 @@ impl CameraController {
// Try to recreate the stream - this is a simplified version
// In a real implementation, you might want to signal the main thread
// to fully reinitialize the camera
// TODO: why fall back to opening device "0" after repeated failures?
match OpenCVCamera::open("0") { // Note: simplified, should use original device
Ok(mut camera) => {
// Configure minimal settings
@ -226,6 +304,11 @@ impl CameraController {
pub fn get_frame_buffer(&self) -> SharedFrameBuffer {
self.frame_buffer.clone()
}
//
// /// Get a clone of the LZ4 compressed buffer (if available)
// pub fn get_lz4_buffer(&self) -> Option<SharedLZ4FrameBuffer> {
// self.lz4_buffer.clone()
// }
/// Check if the camera is currently running
pub fn is_running(&self) -> bool {
@ -466,6 +549,7 @@ impl Clone for CameraController {
camera: None, // Do not clone the camera instance; it owns underlying device resources
stream: None, // Do not clone the stream; it also owns underlying resources
frame_buffer: self.frame_buffer.clone(),
// lz4_buffer: self.lz4_buffer.clone(),
frame_count: self.frame_count,
is_running: self.is_running,
frame_tx, // Use the newly created broadcast channel
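The fan-out above is tokio's bounded broadcast channel; a minimal standalone sketch of its semantics (frame indices stand in for `Frame`s; assumes tokio's `macros` feature, which this project's async entry points already imply):
```rust
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // Bounded at 30, like the file-input path above; slow receivers get
    // RecvError::Lagged instead of stalling the capture loop.
    let (tx, mut rx) = broadcast::channel::<u64>(30);
    tokio::spawn(async move {
        for i in 0..100u64 {
            let _ = tx.send(i); // Err(_) only when no receiver is subscribed
        }
    });
    // Exits when the sender is dropped (Closed) or on the first Lagged error.
    while let Ok(frame_index) = rx.recv().await {
        println!("got frame {}", frame_index);
    }
}
```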

View File

@ -4,6 +4,8 @@ use opencv::{core, imgcodecs, prelude::*};
use std::collections::VecDeque;
use std::path::Path;
use std::sync::{Arc, Mutex};
use log::{info, warn};
use crate::utils::memory_monitor::estimate_mat_size;
/// A single video frame with timestamp and metadata
#[derive(Clone)]
@ -16,6 +18,21 @@ pub struct Frame {
pub index: u64,
}
/// LZ4 compressed frame for ultra-fast memory-efficient storage
#[derive(Clone)]
pub struct CompressedFrame {
/// LZ4 compressed image data
compressed_data: Vec<u8>,
/// Original image dimensions (width, height)
dimensions: (i32, i32),
/// Original image type (for reconstruction)
mat_type: i32,
/// Timestamp when the frame was captured
pub timestamp: DateTime<Utc>,
/// Frame index in the capture sequence
pub index: u64,
}
impl Frame {
/// Create a new frame
pub fn new(mat: core::Mat, timestamp: DateTime<Utc>, index: u64) -> Self {
@ -35,6 +52,86 @@ impl Frame {
)?;
Ok(())
}
/// Convert to LZ4 compressed frame - extremely fast compression
pub fn to_lz4_compressed(&self) -> Result<CompressedFrame> {
let original_size = estimate_mat_size(&self.mat);
// Get Mat properties
let rows = self.mat.rows();
let cols = self.mat.cols();
let mat_type = self.mat.typ();
// Extract the raw Mat bytes via pointer access (unsafe; assumes a contiguous Mat)
let data_slice = unsafe {
let data_ptr = self.mat.ptr(0)?;
let total_elements = self.mat.total();
let elem_size = self.mat.elem_size()?;
let data_size = total_elements * elem_size;
std::slice::from_raw_parts(data_ptr, data_size)
};
// LZ4 compress the raw data - extremely fast!
let compressed_data = lz4_flex::compress_prepend_size(data_slice);
let compressed_size = compressed_data.len();
let compression_ratio = original_size as f64 / compressed_size as f64;
// info!("⚡ LZ4 COMPRESSION: {:.2}MB -> {:.2}MB (ratio: {:.1}:1, ultra-fast!)",
// original_size as f64 / 1_000_000.0,
// compressed_size as f64 / 1_000_000.0,
// compression_ratio);
Ok(CompressedFrame {
compressed_data,
dimensions: (cols, rows),
mat_type,
timestamp: self.timestamp,
index: self.index,
})
}
}
impl CompressedFrame {
/// Get the compressed data size in bytes
pub fn compressed_size(&self) -> usize {
self.compressed_data.len()
}
/// Decompress back to a full Frame - extremely fast decompression
pub fn to_frame(&self) -> Result<Frame> {
// LZ4 decompress - ultra fast!
let decompressed_data = lz4_flex::decompress_size_prepended(&self.compressed_data)
.map_err(|e| anyhow::anyhow!("LZ4 decompression failed: {}", e))?;
// Reconstruct Mat from raw data
let (width, height) = self.dimensions;
// Create new Mat and copy the decompressed data
let mut mat = unsafe { core::Mat::new_rows_cols(height, width, self.mat_type)? };
// Copy decompressed data to Mat
unsafe {
let mat_data_ptr = mat.ptr_mut(0)?;
let mat_data_size = mat.total() * mat.elem_size()?;
std::ptr::copy_nonoverlapping(
decompressed_data.as_ptr(),
mat_data_ptr,
std::cmp::min(decompressed_data.len(), mat_data_size)
);
}
Ok(Frame {
mat,
timestamp: self.timestamp,
index: self.index,
})
}
/// Get original image dimensions
pub fn dimensions(&self) -> (i32, i32) {
self.dimensions
}
}
/// A circular buffer for storing video frames
@ -58,18 +155,40 @@ impl FrameBuffer {
pub fn push(&self, frame: Frame) -> Result<()> {
let mut buffer = self.buffer.lock().unwrap();
let frame_size = estimate_mat_size(&frame.mat);
// If buffer is at capacity, remove the oldest frame
if buffer.len() >= self.capacity {
buffer.pop_front();
let was_full = buffer.len() >= self.capacity;
if was_full {
// Drop the oldest frame; its Mat memory is released here.
buffer.pop_front();
}
// Add the new frame
buffer.push_back(frame);
// Calculate total memory usage
let total_frames = buffer.len();
let estimated_total_memory = total_frames as f64 * frame_size as f64;
// if total_frames % 30 == 0 { // Log every 30 frames to avoid spam
// info!("📦 FRAME BUFFER: {}/{} frames, ~{:.2}MB total memory",
// total_frames, self.capacity, estimated_total_memory / 1_000_000.0);
// }
// // Warn about high memory usage
// if estimated_total_memory > 1_000_000_000.0 { // > 1GB
// warn!("⚠️ FRAME BUFFER HIGH MEMORY: {:.2}GB in buffer!",
// estimated_total_memory / 1_000_000_000.0);
// }
Ok(())
}
/// Get a specific frame by index (most recent = 0, older frames have higher indices)
/// 🚨 WARNING: This method clones the entire frame data! Use sparingly.
pub fn get(&self, index: usize) -> Option<Frame> {
let buffer = self.buffer.lock().unwrap();
@ -79,10 +198,18 @@ impl FrameBuffer {
// Convert from reverse index (newest = 0) to actual index
let actual_index = buffer.len() - 1 - index;
buffer.get(actual_index).cloned()
if let Some(frame) = buffer.get(actual_index) {
let frame_size = estimate_mat_size(&frame.mat);
warn!("🚨 MEMORY: Cloning frame in get() - {:.2}MB will be duplicated!",
frame_size as f64 / 1_000_000.0);
Some(frame.clone())
} else {
None
}
}
/// Get all frames within a specific time range
/// 🚨 WARNING: This method clones ALL matching frames! Can use massive memory.
pub fn get_frames_in_range(
&self,
start_time: DateTime<Utc>,
@ -90,13 +217,22 @@ impl FrameBuffer {
) -> Vec<Frame> {
let buffer = self.buffer.lock().unwrap();
buffer
let matching_frames: Vec<Frame> = buffer
.iter()
.filter(|frame| {
frame.timestamp >= start_time && frame.timestamp <= end_time
})
.cloned()
.collect()
.collect();
let total_frames = matching_frames.len();
if total_frames > 0 {
let estimated_memory = total_frames as f64 * 16.84; // MB per frame
warn!("🚨 MEMORY: get_frames_in_range() cloning {} frames (~{:.2}MB)!",
total_frames, estimated_memory);
}
matching_frames
}
/// Get the number of frames currently in the buffer
@ -136,5 +272,110 @@ impl FrameBuffer {
}
}
/// LZ4 compressed frame buffer for ultra-fast, memory-efficient storage
pub struct LZ4FrameBuffer {
/// The actual buffer containing compressed frames
buffer: Mutex<VecDeque<CompressedFrame>>,
/// Maximum capacity of the buffer
capacity: usize,
/// Keep the last N frames uncompressed for ultra-fast access
recent_frames: Mutex<VecDeque<Frame>>,
/// Number of recent frames to keep uncompressed
recent_count: usize,
}
impl LZ4FrameBuffer {
/// Create a new LZ4 compressed frame buffer
pub fn new(capacity: usize, recent_count: usize) -> Self {
Self {
buffer: Mutex::new(VecDeque::with_capacity(capacity)),
capacity,
recent_frames: Mutex::new(VecDeque::with_capacity(recent_count)),
recent_count,
}
}
/// Add a frame to the buffer with ultra-fast LZ4 compression
pub fn push(&self, frame: Frame) -> Result<()> {
let frame_size = estimate_mat_size(&frame.mat);
// Add to recent frames first
{
let mut recent = self.recent_frames.lock().unwrap();
// If recent buffer is full, compress the oldest recent frame with LZ4
if recent.len() >= self.recent_count {
if let Some(old_frame) = recent.pop_front() {
let compressed = old_frame.to_lz4_compressed()?;
// Add to compressed buffer
let mut buffer = self.buffer.lock().unwrap();
if buffer.len() >= self.capacity {
buffer.pop_front(); // Remove oldest compressed frame
}
buffer.push_back(compressed);
}
}
recent.push_back(frame);
}
Ok(())
}
/// Get a frame by index (0 = most recent) with ultra-fast decompression
pub fn get(&self, index: usize) -> Option<Frame> {
// First check recent frames (no decompression needed)
{
let recent = self.recent_frames.lock().unwrap();
if index < recent.len() {
let actual_index = recent.len() - 1 - index;
if let Some(frame) = recent.get(actual_index) {
return Some(frame.clone());
}
}
}
// Then check compressed frames (ultra-fast LZ4 decompression)
let compressed_index = index - self.recent_frames.lock().unwrap().len();
{
let buffer = self.buffer.lock().unwrap();
if compressed_index < buffer.len() {
let actual_index = buffer.len() - 1 - compressed_index;
if let Some(compressed_frame) = buffer.get(actual_index) {
info!("⚡ LZ4 DECOMPRESSING: Frame {} (ultra-fast!)", index);
return compressed_frame.to_frame().ok();
}
}
}
None
}
/// Get total number of frames (recent + compressed)
pub fn len(&self) -> usize {
self.recent_frames.lock().unwrap().len() + self.buffer.lock().unwrap().len()
}
/// Check if the buffer is empty
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Get the capacity of the buffer
pub fn capacity(&self) -> usize {
self.capacity
}
/// Clear all frames from the buffer
pub fn clear(&self) {
self.recent_frames.lock().unwrap().clear();
self.buffer.lock().unwrap().clear();
}
}
/// Thread-safe frame buffer that can be shared across threads
pub type SharedFrameBuffer = Arc<FrameBuffer>;
/// Thread-safe LZ4 compressed frame buffer that can be shared across threads
pub type SharedLZ4FrameBuffer = Arc<LZ4FrameBuffer>;
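As a quick sanity check on the compression path, a minimal round-trip sketch; the crate name `meteor_detect` follows the test binaries in this commit, and the solid-gray test frame is an assumption:
```rust
use anyhow::Result;
use chrono::Utc;
use meteor_detect::camera::{Frame, LZ4FrameBuffer};
use opencv::core;

fn lz4_roundtrip() -> Result<()> {
    // A solid gray 1080p BGR frame stands in for a captured image (~6.2MB raw).
    let mat = core::Mat::new_rows_cols_with_default(
        1080, 1920, core::CV_8UC3, core::Scalar::all(64.0))?;
    let frame = Frame::new(mat, Utc::now(), 0);

    let compressed = frame.to_lz4_compressed()?;
    println!("compressed to {:.2}MB", compressed.compressed_size() as f64 / 1_000_000.0);
    assert_eq!(compressed.dimensions(), (1920, 1080)); // stored as (cols, rows)

    // Decompress and feed the hybrid buffer: 300 total slots, newest 10 kept raw.
    let restored = compressed.to_frame()?;
    let buffer = LZ4FrameBuffer::new(300, 10);
    buffer.push(restored)?;
    assert_eq!(buffer.len(), 1);
    Ok(())
}
```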

View File

@ -4,7 +4,7 @@ mod opencv;
mod frame_buffer;
pub use controller::CameraController;
pub use frame_buffer::{Frame, FrameBuffer};
pub use frame_buffer::{Frame, FrameBuffer, CompressedFrame, LZ4FrameBuffer, SharedLZ4FrameBuffer};
pub use opencv::OpenCVCamera;
use anyhow::Result;
@ -66,6 +66,20 @@ pub struct CameraSettings {
pub gain: u8,
/// Whether to lock focus at infinity
pub focus_locked: bool,
/// Whether to use file input mode instead of camera
#[serde(default)]
pub file_input_mode: bool,
/// Input file path (when file_input_mode is true)
#[serde(default)]
pub input_file_path: String,
/// Whether to loop the video when it reaches the end
#[serde(default = "default_loop_video")]
pub loop_video: bool,
}
/// Helper function for default loop_video value
fn default_loop_video() -> bool {
true
}
impl Default for CameraSettings {
@ -87,6 +101,9 @@ impl Default for CameraSettings {
exposure: ExposureMode::Auto,
gain: 128, // Mid-range default
focus_locked: true,
file_input_mode: false,
input_file_path: String::new(),
loop_video: true,
}
}
}
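Given the serde defaults above, a file-playback settings value can be built tersely with struct-update syntax; a minimal sketch (the path is illustrative):
```rust
use meteor_detect::camera::CameraSettings;

fn file_playback_settings() -> CameraSettings {
    // Everything not named here falls back to Default (live-camera values).
    CameraSettings {
        file_input_mode: true,
        input_file_path: "/path/to/starsky_video.mp4".to_string(),
        loop_video: true,
        ..CameraSettings::default()
    }
}
```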

View File

@ -51,6 +51,37 @@ impl OpenCVCamera {
})
}
/// Open a video file for input
pub fn open_file<P: AsRef<Path>>(path: P) -> Result<Self> {
let path_str = path.as_ref().to_str()
.ok_or_else(|| anyhow!("Invalid path"))?;
let mut capture = videoio::VideoCapture::from_file(path_str, videoio::CAP_ANY)?;
if !capture.is_opened()? {
return Err(anyhow!("Failed to open video file: {}", path_str));
}
// Get video properties
let width = capture.get(videoio::CAP_PROP_FRAME_WIDTH)? as u32;
let height = capture.get(videoio::CAP_PROP_FRAME_HEIGHT)? as u32;
let fps = capture.get(videoio::CAP_PROP_FPS)? as u32;
let total_frames = capture.get(videoio::CAP_PROP_FRAME_COUNT)? as u32;
info!(
"Opened video file: {} ({}x{} @ {} fps, {} frames)",
path_str, width, height, fps, total_frames
);
Ok(Self {
capture: Arc::new(Mutex::new(capture)),
width,
height,
is_streaming: false,
device: path_str.to_string(),
})
}
/// Create a VideoCapture instance from a path or device index
fn create_capture_from_path(path_str: &str) -> Result<videoio::VideoCapture> {
// Try to parse as integer index first
@ -218,9 +249,28 @@ impl OpenCVCamera {
info!("Started camera streaming");
// Reuse the same camera instance to avoid reopening the device
Ok(OpenCVCaptureStream {
capture: self.capture.clone(),
})
Ok(OpenCVCaptureStream::new(self.capture.clone(), false))
}
/// Start streaming from the camera with loop option for video files
pub fn start_streaming_with_loop(&mut self, loop_video: bool) -> Result<OpenCVCaptureStream> {
// Take the mutex guard to check whether the camera is open
{
let capture_guard = self.capture.lock().unwrap();
if !capture_guard.is_opened()? {
return Err(anyhow!("Camera is not open"));
}
}
self.is_streaming = true;
if loop_video {
info!("Started streaming with loop enabled");
} else {
info!("Started streaming");
}
// Reuse the same camera instance to avoid reopening the device
Ok(OpenCVCaptureStream::new(self.capture.clone(), loop_video))
}
/// Stop streaming from the camera
@ -249,10 +299,19 @@ impl OpenCVCamera {
/// Wrapper around OpenCV VideoCapture for streaming
pub struct OpenCVCaptureStream {
capture: Arc<Mutex<videoio::VideoCapture>>,
loop_video: bool,
}
impl OpenCVCaptureStream {
/// Capture a single frame from the camera
/// Create a new capture stream
pub fn new(capture: Arc<Mutex<videoio::VideoCapture>>, loop_video: bool) -> Self {
Self {
capture,
loop_video,
}
}
/// Capture a single frame from the camera or video file
pub fn capture_frame(&mut self) -> Result<core::Mat> {
let mut frame = core::Mat::default();
@ -261,7 +320,22 @@ impl OpenCVCaptureStream {
if capture_guard.read(&mut frame)? {
if frame.empty() {
return Err(anyhow!("Captured frame is empty"));
// If the video has ended and looping is enabled
if self.loop_video {
debug!("Video file ended, looping back to start");
// Seek back to the start of the video
capture_guard.set(videoio::CAP_PROP_POS_FRAMES, 0.0)?;
// Read a frame again
if capture_guard.read(&mut frame)? {
if frame.empty() {
return Err(anyhow!("Frame is still empty after looping"));
}
} else {
return Err(anyhow!("Failed to read frame after looping"));
}
} else {
return Err(anyhow!("End of video reached"));
}
}
Ok(frame)
} else {
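The rewind in `capture_frame` above is the heart of loop playback; distilled to plain `opencv` crate calls, it might look like this (a sketch, not the project's API):
```rust
use opencv::{core, prelude::*, videoio};

/// Read the next frame, seeking back to the start of the file at EOF.
fn read_looping(capture: &mut videoio::VideoCapture) -> opencv::Result<core::Mat> {
    let mut frame = core::Mat::default();
    if !capture.read(&mut frame)? || frame.empty() {
        // End of file: rewind to the first frame and try once more.
        capture.set(videoio::CAP_PROP_POS_FRAMES, 0.0)?;
        capture.read(&mut frame)?;
    }
    Ok(frame)
}
```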

View File

@ -306,6 +306,9 @@ impl Default for Config {
exposure: ExposureMode::Auto,
gain: 128,
focus_locked: true,
file_input_mode: false,
input_file_path: String::new(),
loop_video: true,
},
gps: GpsConfig::default(),
sensors: SensorConfig::default(),

74
src/display/mod.rs Normal file
View File

@ -0,0 +1,74 @@
use anyhow::Result;
use opencv::{core, prelude::*};
/// Display backend trait for frame rendering
/// This abstraction allows switching between different display implementations
/// while maintaining the same interface
pub trait DisplayBackend {
/// Display a frame in the window
fn show_frame(&mut self, window_name: &str, frame: &core::Mat) -> Result<()>;
/// Check for key presses (non-blocking)
fn poll_key(&self, timeout_ms: i32) -> Result<i32>;
/// Resize window
fn resize_window(&self, window_name: &str, width: i32, height: i32) -> Result<()>;
/// Create/initialize window
fn create_window(&self, window_name: &str) -> Result<()>;
/// Cleanup resources
fn destroy_window(&self, window_name: &str) -> Result<()>;
/// Get performance statistics
fn get_stats(&self) -> DisplayStats;
}
/// Performance statistics for display backend
#[derive(Debug, Clone)]
pub struct DisplayStats {
pub frames_displayed: u64,
pub frames_dropped: u64,
pub avg_render_time_ms: f64,
pub backend_name: String,
}
impl Default for DisplayStats {
fn default() -> Self {
Self {
frames_displayed: 0,
frames_dropped: 0,
avg_render_time_ms: 0.0,
backend_name: "unknown".to_string(),
}
}
}
/// Configuration for display optimization
#[derive(Debug, Clone, Copy)]
pub struct DisplayConfig {
/// Scale factor for display (0.0-1.0, where 1.0 = original size)
pub display_scale: f32,
/// Skip every N frames (1 = show all, 2 = show every other frame)
pub frame_skip: u32,
/// Enable async rendering
pub async_rendering: bool,
/// Maximum render time before dropping frame (ms)
pub max_render_time_ms: u64,
}
impl Default for DisplayConfig {
fn default() -> Self {
Self {
display_scale: 1.0, // Original size
frame_skip: 1, // Show all frames
async_rendering: false,
max_render_time_ms: 33, // ~30 FPS
}
}
}
pub mod opencv_backend;
// pub mod gstreamer_backend; // Future implementation
pub use opencv_backend::OpenCVDisplay;
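A minimal sketch of constructing a tuned backend from this config (the values are illustrative, not recommendations):
```rust
use meteor_detect::display::{DisplayConfig, OpenCVDisplay};

fn make_preview_display() -> OpenCVDisplay {
    // Half-size preview, rendering every other frame.
    let config = DisplayConfig {
        display_scale: 0.5,
        frame_skip: 2,
        ..DisplayConfig::default()
    };
    OpenCVDisplay::new(config)
}
```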

View File

@ -0,0 +1,300 @@
use super::{DisplayBackend, DisplayConfig, DisplayStats};
use crate::utils::memory_monitor::{MemoryMonitor, estimate_mat_size, get_system_memory_usage};
use anyhow::Result;
use opencv::{core, highgui, imgproc, prelude::*};
use std::collections::HashMap;
use std::time::{Duration, Instant};
use log::{info, warn};
/// OpenCV-based display backend with performance optimizations
pub struct OpenCVDisplay {
config: DisplayConfig,
stats: DisplayStats,
frame_counter: u64,
last_render_times: Vec<f64>, // Rolling average for render time
created_windows: HashMap<String, bool>,
last_display_time: Instant,
// Memory optimization: reuse scaled frame buffer
scaled_frame_buffer: Option<core::Mat>,
// Memory monitoring
memory_monitor: MemoryMonitor,
last_memory_report: Instant,
}
impl OpenCVDisplay {
pub fn new(config: DisplayConfig) -> Self {
Self {
config,
stats: DisplayStats {
backend_name: "OpenCV (Optimized)".to_string(),
..Default::default()
},
frame_counter: 0,
last_render_times: Vec::with_capacity(30), // Track last 30 frames
created_windows: HashMap::new(),
last_display_time: Instant::now(),
scaled_frame_buffer: None,
memory_monitor: MemoryMonitor::new(),
last_memory_report: Instant::now(),
}
}
/// Get current configuration
pub fn config(&self) -> &DisplayConfig {
&self.config
}
/// Update configuration at runtime
pub fn set_config(&mut self, config: DisplayConfig) {
// If scale changed significantly, clear buffer to avoid size mismatches
if (config.display_scale - self.config.display_scale).abs() > 0.1 {
self.scaled_frame_buffer = None;
}
self.config = config;
}
/// Scale frame using reusable buffer (only called when scaling is needed)
fn optimize_frame(&mut self, frame: &core::Mat) -> Result<core::Mat> {
let new_width = (frame.cols() as f32 * self.config.display_scale) as i32;
let new_height = (frame.rows() as f32 * self.config.display_scale) as i32;
// Initialize or reuse existing buffer (only allocate once)
let buffer_allocated = self.scaled_frame_buffer.is_none();
if buffer_allocated {
self.scaled_frame_buffer = Some(core::Mat::default());
// Only record allocation once when buffer is created
let buffer_size = estimate_mat_size(self.scaled_frame_buffer.as_ref().unwrap());
if buffer_size > 0 {
self.memory_monitor.record_allocation("scaled_frame_buffer", buffer_size);
}
}
let scaled_frame = self.scaled_frame_buffer.as_mut().unwrap();
imgproc::resize(
frame,
scaled_frame,
core::Size::new(new_width, new_height),
0.0,
0.0,
imgproc::INTER_LINEAR, // Fast linear interpolation
)?;
let cloned_frame = scaled_frame.clone();
// Don't track every clone as a "new allocation" - this is the main problem!
Ok(cloned_frame)
}
/// Scale and display frame without additional cloning (zero-copy approach)
fn scale_and_display(&mut self, frame: &core::Mat, window_name: &str) -> Result<()> {
let new_width = (frame.cols() as f32 * self.config.display_scale) as i32;
let new_height = (frame.rows() as f32 * self.config.display_scale) as i32;
// Initialize or reuse existing buffer
if self.scaled_frame_buffer.is_none() {
self.scaled_frame_buffer = Some(core::Mat::default());
}
let scaled_frame = self.scaled_frame_buffer.as_mut().unwrap();
imgproc::resize(
frame,
scaled_frame,
core::Size::new(new_width, new_height),
0.0,
0.0,
imgproc::INTER_LINEAR,
)?;
// Display directly from buffer (no clone!)
highgui::imshow(window_name, scaled_frame)?;
Ok(())
}
/// Update performance statistics
fn update_stats(&mut self, render_time: Duration, frame_displayed: bool) {
let render_time_ms = render_time.as_secs_f64() * 1000.0;
if frame_displayed {
self.stats.frames_displayed += 1;
// Update rolling average for render time (with fixed capacity)
if self.last_render_times.len() >= 30 {
self.last_render_times.remove(0); // Remove oldest first
}
self.last_render_times.push(render_time_ms);
self.stats.avg_render_time_ms =
self.last_render_times.iter().sum::<f64>() / self.last_render_times.len() as f64;
} else {
self.stats.frames_dropped += 1;
}
}
/// Check if frame should be displayed based on frame skip configuration
fn should_display_frame(&self) -> bool {
(self.frame_counter % self.config.frame_skip as u64) == 0
}
/// Check if we're not rendering too frequently
fn should_throttle(&self) -> bool {
let elapsed = self.last_display_time.elapsed();
elapsed < Duration::from_millis(self.config.max_render_time_ms)
}
}
impl DisplayBackend for OpenCVDisplay {
fn show_frame(&mut self, window_name: &str, frame: &core::Mat) -> Result<()> {
let start_time = Instant::now();
self.frame_counter += 1;
// Check if we should skip this frame
if !self.should_display_frame() {
self.update_stats(start_time.elapsed(), false);
return Ok(());
}
// Throttle rendering if needed
if self.should_throttle() {
self.update_stats(start_time.elapsed(), false);
return Ok(());
}
// Ensure window exists
if !self.created_windows.contains_key(window_name) {
self.create_window(window_name)?;
}
// Store config values before mutable borrow
let async_rendering = self.config.async_rendering;
let needs_scaling = (self.config.display_scale - 1.0).abs() > 0.01;
// Monitor input frame (but don't track as allocation since we don't own it)
let frame_size = estimate_mat_size(frame);
// Note: Don't record input_frame as allocation since it's passed by reference
// Display frame with memory optimization
let display_result = if needs_scaling {
// Apply scaling with memory reuse and display directly from buffer
self.scale_and_display(frame, window_name)?;
Ok(())
} else {
// Use original frame directly (no clone needed)
highgui::imshow(window_name, frame)
};
match display_result {
Ok(_) => {
self.last_display_time = Instant::now();
self.update_stats(start_time.elapsed(), true);
// Periodic memory reporting (every 3 seconds)
if self.last_memory_report.elapsed() > Duration::from_secs(3) {
// info!("=== Display Memory Report ===");
// System memory (this is what really matters)
// if let Some((used, free)) = get_system_memory_usage() {
// let total = used + free;
// let used_gb = used as f64 / 1_000_000_000.0;
// let free_gb = free as f64 / 1_000_000_000.0;
// let total_gb = total as f64 / 1_000_000_000.0;
// let used_percent = (used as f64 / total as f64) * 100.0;
//
// info!("System memory: {:.2}GB/{:.2}GB used ({:.1}%), {:.2}GB free",
// used_gb, total_gb, used_percent, free_gb);
//
// if used_percent > 80.0 {
// warn!("⚠️ HIGH MEMORY USAGE: {:.1}% - Consider reducing display scale", used_percent);
// }
//
// if used_percent > 90.0 {
// warn!("🚨 CRITICAL MEMORY USAGE: {:.1}% - System may become unstable", used_percent);
// }
// }
// Detailed memory leak detection
// info!("=== MEMORY LEAK DETECTION ===");
// info!("Frame counter: {}", self.frame_counter);
// info!("Frame size: {:.2}MB", frame_size as f64 / 1_000_000.0);
// info!("Display frames: {}, Dropped: {}",
// self.stats.frames_displayed,
// self.stats.frames_dropped);
//
// // Check render times array size
// info!("Render times array length: {}", self.last_render_times.len());
// if self.last_render_times.len() > 50 {
// warn!("⚠️ Render times array growing beyond expected size!");
// }
//
// // Check created windows map size
// info!("Created windows map size: {}", self.created_windows.len());
// if self.created_windows.len() > 10 {
// warn!("⚠️ Too many windows created: {}", self.created_windows.len());
// }
//
// // Check scaled buffer status
// if let Some(ref buffer) = self.scaled_frame_buffer {
// let buffer_size = estimate_mat_size(buffer);
// info!("Scaled buffer size: {:.2}MB", buffer_size as f64 / 1_000_000.0);
// info!("Scaled buffer dimensions: {}x{}", buffer.cols(), buffer.rows());
// if buffer_size > 50_000_000 { // > 50MB
// warn!("⚠️ Scaled buffer unusually large: {:.2}MB", buffer_size as f64 / 1_000_000.0);
// }
// } else {
// info!("Scaled buffer: None");
// }
//
// // Memory monitor internal state
// let mem_stats = self.memory_monitor.get_stats();
// info!("Memory monitor - Current: {:.2}MB, Peak: {:.2}MB, Allocations: {}, Active: {}",
// mem_stats.current_usage as f64 / 1_000_000.0,
// mem_stats.peak_usage as f64 / 1_000_000.0,
// mem_stats.allocation_count,
// mem_stats.active_allocations);
//
// if mem_stats.active_allocations > 100 {
// warn!("⚠️ Too many active allocations tracked: {}", mem_stats.active_allocations);
// // Print detailed allocation report
// self.memory_monitor.print_report();
// }
//
// self.last_memory_report = Instant::now();
}
}
Err(e) => {
self.update_stats(start_time.elapsed(), false);
return Err(e.into());
}
}
Ok(())
}
fn poll_key(&self, timeout_ms: i32) -> Result<i32> {
Ok(highgui::wait_key(timeout_ms)?)
}
fn resize_window(&self, window_name: &str, width: i32, height: i32) -> Result<()> {
highgui::resize_window(window_name, width, height)?;
Ok(())
}
fn create_window(&self, window_name: &str) -> Result<()> {
highgui::named_window(window_name, highgui::WINDOW_NORMAL)?;
Ok(())
}
fn destroy_window(&self, window_name: &str) -> Result<()> {
highgui::destroy_window(window_name)?;
Ok(())
}
fn get_stats(&self) -> DisplayStats {
self.stats.clone()
}
}
impl Drop for OpenCVDisplay {
fn drop(&mut self) {
let _ = highgui::destroy_all_windows();
}
}
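Because rendering goes through the `DisplayBackend` trait, a capture loop can stay backend-agnostic and later swap in the planned GStreamer implementation; a minimal sketch (window name and quit key are illustrative):
```rust
use anyhow::Result;
use meteor_detect::display::DisplayBackend;
use opencv::core;

fn run_preview<B: DisplayBackend>(backend: &mut B, frames: &[core::Mat]) -> Result<()> {
    backend.create_window("preview")?;
    for frame in frames {
        backend.show_frame("preview", frame)?;
        if backend.poll_key(1)? == 'q' as i32 {
            break; // quit on 'q'
        }
    }
    backend.destroy_window("preview")
}
```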

View File

@ -74,6 +74,19 @@ pub struct GpsStatus {
pub camera_orientation: CameraOrientation,
}
impl Default for GpsStatus {
fn default() -> Self {
Self {
position: GeoPosition::default(),
satellites: 0,
timestamp: Utc::now(),
sync_status: "NoSync".to_string(),
time_accuracy_ms: 0.0,
camera_orientation: CameraOrientation::default(),
}
}
}
/// Configuration for the GPS module
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GpsConfig {

View File

@ -10,6 +10,7 @@ pub const PLATFORM_SUPPORTS_GPIO: bool = false;
// Re-export all modules so they are visible to the examples
pub mod camera;
pub mod config;
pub mod display;
pub mod utils;
pub mod gps;
pub mod sensors;

View File

@ -4,7 +4,7 @@
//! the event bus to receive updates instead of directly accessing shared state.
use anyhow::{Context, Result};
use chrono::{DateTime, Utc};
use chrono::{DateTime, Utc, Local, TimeZone};
use log::{debug, error, info, warn};
use opencv::{core, imgproc, prelude::*, types};
use serde::{Deserialize, Serialize};
@ -175,7 +175,9 @@ impl EventWatermark {
for content in &self.options.content {
match content {
WatermarkContent::Timestamp => {
lines.push(timestamp.format(&self.options.time_format).to_string());
// Convert UTC timestamp to local time
let local_timestamp = timestamp.with_timezone(&Local);
lines.push(local_timestamp.format(&self.options.time_format).to_string());
},
WatermarkContent::GpsCoordinates => {
lines.push(self.format_coordinates().await);
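The timestamp conversion above is plain chrono; isolated for clarity (the format string here is an assumption, the real one comes from `options.time_format`):
```rust
use chrono::{DateTime, Local, Utc};

/// Render a UTC capture timestamp in the host's local time zone.
fn format_local(timestamp: DateTime<Utc>) -> String {
    timestamp.with_timezone(&Local).format("%Y-%m-%d %H:%M:%S").to_string()
}
```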

View File

@ -1,6 +1,6 @@
pub(crate) mod watermark;
pub(crate) mod event_watermark;
pub(crate) mod star_chart;
pub mod watermark;
pub mod event_watermark;
pub mod star_chart;
pub use watermark::{WatermarkOptions, WatermarkPosition, Watermark, WatermarkContent};
pub use event_watermark::EventWatermark;

File diff suppressed because one or more lines are too long

191
src/utils/memory_monitor.rs Normal file
View File

@ -0,0 +1,191 @@
use std::collections::HashMap;
use std::time::Instant;
use log::{info, warn};
use opencv::core::MatTraitConst;
/// Memory monitoring utility for tracking allocations and identifying leaks
pub struct MemoryMonitor {
allocations: HashMap<String, (usize, Instant)>,
peak_usage: usize,
current_usage: usize,
allocation_count: u64,
}
impl MemoryMonitor {
pub fn new() -> Self {
Self {
allocations: HashMap::new(),
peak_usage: 0,
current_usage: 0,
allocation_count: 0,
}
}
/// Record an allocation
pub fn record_allocation(&mut self, name: &str, size: usize) {
self.current_usage += size;
self.allocation_count += 1;
if self.current_usage > self.peak_usage {
self.peak_usage = self.current_usage;
}
self.allocations.insert(
format!("{}_{}", name, self.allocation_count),
(size, Instant::now())
);
// Log large allocations
if size > 10_000_000 { // > 10MB
warn!("Large allocation: {} - {:.2} MB", name, size as f64 / 1_000_000.0);
}
}
/// Record a deallocation
pub fn record_deallocation(&mut self, name: &str, size: usize) {
if self.current_usage >= size {
self.current_usage -= size;
}
}
/// Get current memory statistics
pub fn get_stats(&self) -> MemoryStats {
MemoryStats {
current_usage: self.current_usage,
peak_usage: self.peak_usage,
allocation_count: self.allocation_count,
active_allocations: self.allocations.len(),
}
}
/// Print memory report
pub fn print_report(&self) {
info!("=== Memory Report ===");
info!("Current usage: {:.2} MB", self.current_usage as f64 / 1_000_000.0);
info!("Peak usage: {:.2} MB", self.peak_usage as f64 / 1_000_000.0);
info!("Total allocations: {}", self.allocation_count);
info!("Active allocations: {}", self.allocations.len());
info!("Allocations HashMap capacity: {}", self.allocations.capacity());
// Check for potential HashMap memory issues
if self.allocations.capacity() > self.allocations.len() * 4 {
warn!("⚠️ HashMap has excessive capacity: {} vs {} used",
self.allocations.capacity(), self.allocations.len());
}
// Find largest allocations
let mut sorted_allocs: Vec<_> = self.allocations.iter().collect();
sorted_allocs.sort_by(|a, b| b.1.0.cmp(&a.1.0));
info!("Top 10 largest allocations:");
for (name, (size, time)) in sorted_allocs.iter().take(10) {
info!(" {} - {:.2} MB (age: {:.1}s)",
name,
*size as f64 / 1_000_000.0,
time.elapsed().as_secs_f64());
}
// Count allocation types
let mut type_counts: std::collections::HashMap<String, (usize, usize)> = std::collections::HashMap::new();
for (name, (size, _)) in &self.allocations {
let base_name = name.split('_').next().unwrap_or(name).to_string();
let entry = type_counts.entry(base_name).or_insert((0, 0));
entry.0 += 1; // count
entry.1 += size; // total size
}
info!("Allocation types summary:");
let mut type_vec: Vec<_> = type_counts.iter().collect();
type_vec.sort_by(|a, b| b.1.1.cmp(&a.1.1)); // Sort by total size
for (type_name, (count, total_size)) in type_vec.iter().take(5) {
info!(" {}: {} allocations, {:.2} MB total",
type_name, count, *total_size as f64 / 1_000_000.0);
}
}
/// Clear old allocations (for cleanup)
pub fn cleanup_old(&mut self, max_age_secs: u64) {
let cutoff = Instant::now() - std::time::Duration::from_secs(max_age_secs);
let before_count = self.allocations.len();
self.allocations.retain(|_, (_, time)| *time > cutoff);
let removed = before_count - self.allocations.len();
if removed > 0 {
info!("Cleaned up {} old allocation records", removed);
}
}
}
#[derive(Debug, Clone)]
pub struct MemoryStats {
pub current_usage: usize,
pub peak_usage: usize,
pub allocation_count: u64,
pub active_allocations: usize,
}
/// Estimate OpenCV Mat memory usage
pub fn estimate_mat_size(mat: &opencv::core::Mat) -> usize {
if mat.empty() {
return 0;
}
let rows = mat.rows() as usize;
let cols = mat.cols() as usize;
let channels = mat.channels() as usize;
let elem_size = match mat.depth() {
opencv::core::CV_8U | opencv::core::CV_8S => 1,
opencv::core::CV_16U | opencv::core::CV_16S => 2,
opencv::core::CV_32S | opencv::core::CV_32F => 4,
opencv::core::CV_64F => 8,
_ => 4, // Default
};
rows * cols * channels * elem_size
}
/// Get system memory usage (macOS specific)
#[cfg(target_os = "macos")]
pub fn get_system_memory_usage() -> Option<(u64, u64)> {
use std::process::Command;
let output = Command::new("vm_stat").output().ok()?;
let output_str = String::from_utf8_lossy(&output.stdout);
let mut free_pages = 0u64;
let mut active_pages = 0u64;
let mut inactive_pages = 0u64;
let mut wired_pages = 0u64;
// vm_stat prints the actual page size in its header line; default to 4KB
// (Intel Macs), since Apple Silicon uses 16KB pages.
let mut page_size = 4096u64;
for line in output_str.lines() {
if line.contains("page size of") {
if let Some(num_str) = line.split_whitespace().rev().nth(1) {
page_size = num_str.parse().unwrap_or(page_size);
}
} else if line.contains("Pages free:") {
if let Some(num_str) = line.split_whitespace().nth(2) {
free_pages = num_str.trim_end_matches('.').parse().unwrap_or(0);
}
} else if line.contains("Pages active:") {
if let Some(num_str) = line.split_whitespace().nth(2) {
active_pages = num_str.trim_end_matches('.').parse().unwrap_or(0);
}
} else if line.contains("Pages inactive:") {
if let Some(num_str) = line.split_whitespace().nth(2) {
inactive_pages = num_str.trim_end_matches('.').parse().unwrap_or(0);
}
} else if line.contains("Pages wired down:") {
if let Some(num_str) = line.split_whitespace().nth(3) {
wired_pages = num_str.trim_end_matches('.').parse().unwrap_or(0);
}
}
}
let used_memory = (active_pages + inactive_pages + wired_pages) * page_size;
let free_memory = free_pages * page_size;
Some((used_memory, free_memory))
}
#[cfg(not(target_os = "macos"))]
pub fn get_system_memory_usage() -> Option<(u64, u64)> {
None // Not implemented for other platforms
}
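A minimal sketch tying the estimator and the monitor together (the 1080p test Mat is an assumption):
```rust
use meteor_detect::utils::memory_monitor::{estimate_mat_size, MemoryMonitor};
use opencv::core;

fn main() -> opencv::Result<()> {
    // 1080p BGR, 8-bit: 1080 rows × 1920 cols × 3 channels × 1 byte = 6,220,800 bytes.
    let mat = core::Mat::new_rows_cols_with_default(
        1080, 1920, core::CV_8UC3, core::Scalar::all(0.0))?;
    assert_eq!(estimate_mat_size(&mat), 1080 * 1920 * 3);

    let mut monitor = MemoryMonitor::new();
    monitor.record_allocation("demo_frame", estimate_mat_size(&mat));
    monitor.print_report(); // logs via `log`, so a logger must be installed to see output
    Ok(())
}
```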

View File

@ -1,2 +1,3 @@
// Utilities for working with OpenCV across different versions
pub mod opencv_compat;
pub mod memory_monitor;

View File

@ -0,0 +1,28 @@
use meteor_detect::overlay::star_chart::StarChart;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Test the embedded constellation data loading
println!("Testing embedded constellation data loading...");
let constellations = StarChart::load_embedded_constellation_data()?;
println!("✅ Successfully loaded {} constellations", constellations.len());
// Display info about first few constellations
for (i, constellation) in constellations.iter().take(5).enumerate() {
println!("{}. {} ({}): {} stars, {} lines",
i + 1,
constellation.long_name,
constellation.short_name,
constellation.star_positions.len(),
constellation.line_connections.len());
}
println!("\nTotal star positions across all constellations: {}",
constellations.iter().map(|c| c.star_positions.len()).sum::<usize>());
println!("Total line connections across all constellations: {}",
constellations.iter().map(|c| c.line_connections.len()).sum::<usize>());
Ok(())
}