feat: camera module complete

This commit is contained in:
grabbit 2025-04-05 14:46:27 +08:00
parent cdb85c682f
commit 369c1081e3
11 changed files with 778 additions and 304 deletions

View File

@@ -61,3 +61,11 @@ mockall = "0.11.4" # Mocking for tests
[[example]]
name = "cams_detector_demo"
path = "examples/cams_detector_demo.rs"
[[example]]
name = "camera_demo"
path = "examples/camera_demo.rs"
[[example]]
name = "meteor_detection_system"
path = "examples/meteor_detection_system.rs"

docs/camera_module.md Normal file
View File

@@ -0,0 +1,209 @@
# Camera Module Usage Guide
## Overview
The Camera module is a complete camera control and management system. It provides a high-level API for configuring, capturing, storing, and processing camera frames. It is primarily intended for astronomical observation systems, in particular meteor detection, but its design is flexible enough for any application that needs video capture and processing.
Main features:
- Camera device initialization and configuration
- Real-time video stream capture
- Efficient frame buffer management
- Video event storage and processing
- Camera health monitoring
## Core Components
### 1. CameraController
`CameraController` is the main control interface; it manages the camera's entire lifecycle and operational flow.
Key features:
- Initialize and configure the camera device
- Start and stop video stream capture
- Manage the frame buffer
- Provide a frame subscription mechanism
- Save meteor events and the associated video
- Monitor camera health
### 2. OpenCVCamera
`OpenCVCamera` is the concrete camera hardware driver, implemented on top of the OpenCV library.
Key features:
- Supports a variety of camera devices (by path or index)
- Configures camera resolution, frame rate, exposure, and gain
- Controls camera focus
- Manages the video stream
### 3. Frame and FrameBuffer
`Frame` represents a single captured video frame, including image data, a timestamp, and metadata.
`FrameBuffer` is an efficient circular buffer that stores the most recently captured video frames.
Key features:
- Maintains a fixed-capacity frame buffer
- Retrieves frames by index or time range
- Extracts the frames surrounding a specific event
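The circular-buffer behavior described above can be sketched in plain Rust. This is a simplified stand-in for the real `FrameBuffer`, not the module's actual implementation; the `RingBuffer` name and its methods are illustrative:

```rust
use std::collections::VecDeque;

/// Simplified stand-in for the module's FrameBuffer: a fixed-capacity
/// ring buffer where index 0 is always the newest entry.
struct RingBuffer<T> {
    capacity: usize,
    items: VecDeque<T>,
}

impl<T> RingBuffer<T> {
    fn new(capacity: usize) -> Self {
        Self { capacity, items: VecDeque::with_capacity(capacity) }
    }

    /// Push a new item, evicting the oldest when full.
    fn push(&mut self, item: T) {
        if self.items.len() == self.capacity {
            self.items.pop_back(); // drop the oldest frame
        }
        self.items.push_front(item); // newest frame lives at index 0
    }

    /// Get an item by age: 0 = newest, 1 = previous, and so on.
    fn get(&self, index: usize) -> Option<&T> {
        self.items.get(index)
    }
}

fn main() {
    let mut buf = RingBuffer::new(3);
    for frame_id in 1..=5 {
        buf.push(frame_id);
    }
    // Capacity is 3, so only frames 3, 4, 5 remain; 5 is the newest.
    assert_eq!(buf.get(0), Some(&5));
    assert_eq!(buf.get(2), Some(&3));
    assert_eq!(buf.get(3), None);
}
```

Keeping the newest frame at index 0 matches how the module's detector fetches the latest frame with `frame_buffer.get(0)`.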
## Key Data Structures
### CameraSettings
```rust
pub struct CameraSettings {
    pub device: String,          // Camera device path or index
    pub resolution: Resolution,  // Resolution setting
    pub fps: u32,                // Frames per second
    pub exposure: ExposureMode,  // Exposure mode
    pub gain: u8,                // Gain/ISO setting (0-255)
    pub focus_locked: bool,      // Whether focus is locked at infinity
}
```
### Resolution
```rust
pub enum Resolution {
    HD1080p,  // 1920x1080
    HD720p,   // 1280x720
    VGA,      // 640x480
    Custom {  // Custom resolution
        width: u32,
        height: u32,
    },
}
```
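A helper that maps each variant to its pixel dimensions might look like the following. This is a sketch: the module's driver code uses a similar `dimensions()` method, but the exact signature shown here is an assumption:

```rust
pub enum Resolution {
    HD1080p,
    HD720p,
    VGA,
    Custom { width: u32, height: u32 },
}

impl Resolution {
    /// Return (width, height) in pixels for this variant.
    pub fn dimensions(&self) -> (u32, u32) {
        match self {
            Resolution::HD1080p => (1920, 1080),
            Resolution::HD720p => (1280, 720),
            Resolution::VGA => (640, 480),
            Resolution::Custom { width, height } => (*width, *height),
        }
    }
}

fn main() {
    assert_eq!(Resolution::HD720p.dimensions(), (1280, 720));
    assert_eq!(
        Resolution::Custom { width: 800, height: 600 }.dimensions(),
        (800, 600)
    );
}
```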
### ExposureMode
```rust
pub enum ExposureMode {
    Auto,         // Automatic exposure control
    Manual(u32),  // Manual exposure control, in microseconds
}
```
### MeteorEvent
```rust
pub struct MeteorEvent {
    pub id: uuid::Uuid,                      // Unique event identifier
    pub timestamp: DateTime<Utc>,            // Detection timestamp
    pub confidence: f32,                     // Confidence score (0.0-1.0)
    pub bounding_box: (u32, u32, u32, u32),  // Image coordinates (top-left x, top-left y, width, height)
    pub video_path: String,                  // Path to the saved video clip
}
```
## Usage Flow
A typical usage flow consists of the following steps:
1. Create and configure the camera controller
2. Initialize the camera device
3. Start video capture
4. Subscribe to frame updates (optional)
5. Process captured frames
6. Save detected events
7. Stop capture and clean up resources
## Example Code
See `examples/camera_demo.rs` for a more detailed usage example.
## Notes
1. **Device compatibility**: The module primarily supports camera devices on Linux and macOS. Other platforms may require additional configuration.
2. **Resource management**: Video capture can consume a lot of memory, especially at high resolutions and frame rates; size the frame buffer accordingly.
3. **Dependencies**: The module relies on the OpenCV library for image processing and video capture; make sure the corresponding system dependencies are installed correctly.
4. **Error recovery**: The module includes error detection and recovery mechanisms, but critical applications should still implement additional monitoring and failover strategies.
5. **Performance**: Be mindful of system resource limits, especially on embedded devices; choose resolution, frame rate, and buffer size conservatively.
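To make the resource-management note above concrete, the buffer's memory footprint can be estimated from the frame dimensions. This sketch assumes uncompressed 3-byte BGR pixels, which is how OpenCV typically stores color frames; actual usage depends on the pixel format the camera delivers:

```rust
/// Estimate frame-buffer memory for uncompressed BGR (3 bytes/pixel) frames.
fn buffer_bytes(width: u64, height: u64, capacity: u64) -> u64 {
    width * height * 3 * capacity
}

fn main() {
    // 1080p, 30 buffered frames (~1 second at 30 fps): ~187 MB.
    let hd1080 = buffer_bytes(1920, 1080, 30);
    assert_eq!(hd1080, 186_624_000);
    // 720p with the same capacity needs less than half of that.
    let hd720 = buffer_bytes(1280, 720, 30);
    assert_eq!(hd720, 82_944_000);
    println!("1080p x30 = {} bytes, 720p x30 = {} bytes", hd1080, hd720);
}
```

Buffering several seconds of 1080p video quickly reaches hundreds of megabytes, which is why resolution and buffer capacity matter on embedded hardware.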
## Advanced Features
### Frame Subscription
The module implements a real-time frame notification mechanism using tokio's broadcast channel:
```rust
// Subscribe to frame updates
let mut frame_receiver = camera_controller.subscribe_to_frames();

// Process frames in a separate task
tokio::spawn(async move {
    while let Ok(frame) = frame_receiver.recv().await {
        // Handle the new frame
    }
});
```
### Event Storage
When a meteor is detected, the associated video clip can be saved:
```rust
let event = camera_controller.save_meteor_event(
    timestamp,           // Time the event occurred
    0.95,                // Confidence
    (100, 200, 50, 30),  // Bounding box (x, y, width, height)
    5,                   // 5 seconds before the event
    3,                   // 3 seconds after the event
).await?;
```
### Health Monitoring
The health-check API can be used to monitor camera status:
```rust
let is_healthy = camera_controller.check_health().await?;
if !is_healthy {
    // Apply a recovery strategy
}
```
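One possible recovery strategy for the unhealthy branch above is to retry reinitialization with exponential backoff. This sketch only computes the delay schedule; the retry loop around `check_health` is left out, and the base and cap values are illustrative assumptions, not module defaults:

```rust
use std::time::Duration;

/// Exponential backoff: base * 2^attempt, capped at `max`.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.checked_mul(1u32 << attempt.min(16))
        .unwrap_or(max)
        .min(max)
}

fn main() {
    let base = Duration::from_millis(500);
    let max = Duration::from_secs(30);
    // Schedule: 0.5s, 1s, 2s, 4s, ... capped at 30s.
    assert_eq!(backoff_delay(0, base, max), Duration::from_millis(500));
    assert_eq!(backoff_delay(1, base, max), Duration::from_secs(1));
    assert_eq!(backoff_delay(10, base, max), Duration::from_secs(30));
    for attempt in 0..4 {
        println!("retry {} after {:?}", attempt, backoff_delay(attempt, base, max));
    }
}
```

Capping the delay keeps the system responsive once the camera comes back, while the exponential growth avoids hammering a device that is genuinely offline.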
## Troubleshooting
### Common Problems
1. **Cannot open the camera device**
   - Check that the device path or index is correct
   - Confirm the process has permission to access the device
   - Verify the device is not in use by another application
2. **Resolution/frame rate settings have no effect**
   - Not every camera supports every resolution and frame rate combination
   - Check the camera's specifications and supported formats
   - Use more conservative values, or try a custom resolution
3. **Frame capture stops or fails intermittently**
   - May indicate a camera hardware problem
   - Check for USB connection or power issues
   - Consider reinitializing or power-cycling the camera
4. **Excessive memory usage**
   - Reduce the frame buffer capacity
   - Lower the resolution or frame rate
   - Make sure frames that are no longer needed are processed and released promptly
### Diagnostics
Use the `get_status()` method to obtain detailed camera status information:
```rust
let status = camera_controller.get_status().await?;
println!("Camera status: {}", status);
```
## Performance Tuning Suggestions
1. Choose a resolution that matches actual needs; avoid going higher than necessary
2. Process frames only when needed, and avoid redundant image-processing operations
3. Use the frame subscription mechanism for asynchronous processing to avoid blocking the main loop
4. On resource-constrained systems, consider processing a region of interest (ROI) instead of the full frame
5. For long-running deployments, check health periodically and implement automatic recovery

examples/camera_demo.rs Normal file
View File

@@ -0,0 +1,168 @@
use anyhow::Result;
use log::{error, info};
use std::path::PathBuf;
use std::time::Duration;
use tokio::time;

// Import Camera module components
use meteor_detect::camera::{CameraController, CameraSettings, ExposureMode, Resolution, Frame};
use meteor_detect::config::Config;

/// A simple demo program showing how to use the main features of the Camera module
#[tokio::main]
async fn main() -> Result<()> {
    // Initialize logging
    env_logger::init();

    // 1. Create the configuration
    let config = create_demo_config();

    // 2. Initialize the camera controller
    info!("Initializing camera controller");
    let mut camera_controller = CameraController::new(&config).await?;

    // 3. Initialize the camera device
    info!("Initializing camera device");
    camera_controller.initialize().await?;

    // 4. Start video capture
    info!("Starting video capture");
    camera_controller.start_capture().await?;

    // 5. Get the camera status
    let status = camera_controller.get_status().await?;
    info!("Camera status: {}", status);

    // 6. Subscribe to frame updates
    info!("Setting up frame handler");
    let mut frame_receiver = camera_controller.subscribe_to_frames();

    // Process frames in a separate task
    let frame_processor = tokio::spawn(async move {
        let mut frame_count = 0;
        // Exit after processing 10 frames
        while let Ok(frame) = frame_receiver.recv().await {
            frame_count += 1;
            info!("Received frame #{}: timestamp={}", frame.index, frame.timestamp);
            // Example: save the 5th frame as an image file
            if frame_count == 5 {
                info!("Saving the 5th frame as an image file");
                let path = PathBuf::from("demo_frame.jpg");
                if let Err(e) = frame.save_to_file(&path) {
                    error!("Failed to save frame: {}", e);
                }
            }
            if frame_count >= 10 {
                info!("Processed 10 frames, exiting the processing loop");
                break;
            }
        }
        info!("Frame processor finished");
        frame_count
    });
    // 7. Wait 5 seconds, then simulate a meteor event
    info!("Waiting 5 seconds before simulating a meteor event...");
    time::sleep(Duration::from_secs(5)).await;

    // Use the current time as the event timestamp
    let event_timestamp = chrono::Utc::now();

    // Simulate meteor event detection - save the event video
    info!("Saving meteor event");
    match camera_controller
        .save_meteor_event(
            event_timestamp,
            0.85,                 // Confidence
            (100, 100, 200, 150), // Bounding box (x, y, width, height)
            3,                    // 3 seconds before the event
            2,                    // 2 seconds after the event
        )
        .await
    {
        Ok(event) => {
            info!("Saved meteor event: id={}, video='{}'", event.id, event.video_path);
        }
        Err(e) => {
            error!("Failed to save meteor event: {}", e);
        }
    }

    // 8. Update camera settings
    info!("Updating camera settings");
    let mut new_settings = config.camera.clone();
    new_settings.resolution = Resolution::HD1080p;
    new_settings.fps = 24;
    camera_controller.update_settings(new_settings).await?;

    // 9. Check camera health
    info!("Checking camera health");
    let is_healthy = camera_controller.check_health().await?;
    info!("Camera health: {}", if is_healthy { "OK" } else { "unhealthy" });

    // 10. Wait for frame processing to finish
    info!("Waiting for the frame processor to finish");
    let processed_frames = frame_processor.await?;
    info!("Processed {} frames in total", processed_frames);

    // 11. Stop video capture
    info!("Stopping video capture");
    camera_controller.stop_capture().await?;

    info!("Demo complete");
    Ok(())
}

/// Create a configuration for the demo
fn create_demo_config() -> Config {
    let camera_settings = CameraSettings {
        // Typically "/dev/video0" on Linux and "0" on macOS
        device: "0".to_string(),
        resolution: Resolution::HD720p,
        fps: 30,
        exposure: ExposureMode::Auto,
        gain: 128,
        focus_locked: true,
    };
    Config {
        camera: camera_settings,
        // Use defaults for the remaining configuration fields
        ..Default::default()
    }
}
//
// /// A more involved frame-processing example, provided as a standalone
// /// function for reference if needed
// #[allow(dead_code)]
// async fn process_frames_with_opencv(frame: &Frame) -> Result<()> {
//     use opencv::{core, imgproc, prelude::*};
//
//     // 1. Convert to grayscale
//     let mut gray = core::Mat::default();
//     imgproc::cvt_color(&frame.mat, &mut gray, imgproc::COLOR_BGR2GRAY, 0)?;
//
//     // 2. Apply Gaussian blur
//     let mut blurred = core::Mat::default();
//     imgproc::gaussian_blur(
//         &gray,
//         &mut blurred,
//         core::Size::new(5, 5),
//         1.5,
//         1.5,
//         core::BORDER_DEFAULT,
//     )?;
//
//     // 3. Apply edge detection
//     let mut edges = core::Mat::default();
//     imgproc::canny(&blurred, &mut edges, 50.0, 150.0, 3, false)?;
//
//     // Save the processed image (optional)
//     opencv::imgcodecs::imwrite("processed_frame.jpg", &edges, &core::Vector::new())?;
//
//     Ok(())
// }

View File

@@ -0,0 +1,234 @@
use anyhow::Result;
use log::{error, info, warn};
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::mpsc::{self, Sender};
use tokio::time;

// Import Camera module components
use meteor_detect::camera::{CameraController, CameraSettings, ExposureMode, Resolution, FrameBuffer, Frame};
use meteor_detect::config::Config;

/// Demonstrates how to build a complete meteor detection system on top of the camera module
#[tokio::main]
async fn main() -> Result<()> {
    // Initialize logging
    env_logger::init();

    // 1. Create and initialize the camera
    let config = create_detector_config();
    let mut camera = CameraController::new(&config).await?;
    info!("Initializing camera system");
    camera.initialize().await?;
    info!("Starting camera capture");
    camera.start_capture().await?;

    // 2. Get a reference to the frame buffer (for detector access)
    let frame_buffer = camera.get_frame_buffer();

    // 3. Create communication channels
    // For sending events from the detector to the main program
    let (event_tx, mut event_rx) = mpsc::channel(10);
    // For sending health status from the health monitor to the main program
    let (health_tx, mut health_rx) = mpsc::channel(10);

    // 4. Start the detector task, which reads frames from the buffer for processing
    start_detector_task(frame_buffer.clone(), event_tx).await;

    // 5. Start the health monitor task - it only reports health status rather than operating the camera directly
    start_health_monitor_task(health_tx).await;

    // 6. The main thread handles detection events and health status
    info!("Waiting for detection events...");
    let mut saved_events = 0;

    // Timer for polling health status
    let mut health_interval = time::interval(Duration::from_secs(30));
    loop {
        tokio::select! {
            // Handle detection events
            Some(event) = event_rx.recv() => {
                info!("Received meteor detection event: time={}, confidence={:.2}",
                    event.timestamp, event.confidence);
                // Save the event video
                match camera.save_meteor_event(
                    event.timestamp,
                    event.confidence,
                    event.bounding_box,
                    5, // 5 seconds before the event
                    3, // 3 seconds after the event
                ).await {
                    Ok(saved_event) => {
                        info!("Saved meteor event video: {}", saved_event.video_path);
                        saved_events += 1;
                        // Additional processing could go here, e.g. uploading to cloud storage or sending notifications
                    },
                    Err(e) => {
                        error!("Failed to save meteor event: {}", e);
                    }
                }
                // For demo purposes, exit after saving 3 events
                if saved_events >= 3 {
                    info!("Saved enough events, exiting the processing loop");
                    break;
                }
            },
            // Handle the periodic health check
            _ = health_interval.tick() => {
                // Run the health check directly on the main thread
                match camera.check_health().await {
                    Ok(is_healthy) => {
                        if !is_healthy {
                            warn!("Camera health check failed, intervention may be required");
                        } else {
                            info!("Camera health: OK");
                        }
                    },
                    Err(e) => {
                        error!("Health check failed: {}", e);
                    }
                }
                // Fetch and log the camera status
                if let Ok(status) = camera.get_status().await {
                    info!("Camera status: {}", status);
                }
            },
            // Receive messages from the health monitor (for demo purposes; handle as needed in practice)
            Some(health_msg) = health_rx.recv() => {
                info!("Received health monitor message: {}", health_msg);
            },
            // Exit the loop once all channels are closed
            else => break,
        }
    }

    // 7. Clean up resources
    info!("Stopping camera capture");
    camera.stop_capture().await?;
    info!("Meteor detection system demo complete");
    Ok(())
}
/// Start the detector task
async fn start_detector_task(frame_buffer: Arc<FrameBuffer>, event_tx: Sender<DetectionEvent>) {
    tokio::spawn(async move {
        info!("Starting meteor detector");
        let mut detection_count = 0;
        let mut last_detection_time = chrono::Utc::now();
        // Simulated detection loop
        loop {
            // Simulate frame analysis (a real application would analyze every frame continuously)
            time::sleep(Duration::from_secs(2)).await;
            // Fetch the latest frame from the frame buffer
            if let Some(frame) = frame_buffer.get(0) {
                let now = chrono::Utc::now();
                // Require at least 10 seconds since the last detection to prevent duplicates
                if (now - last_detection_time).num_seconds() >= 10 {
                    // The real detection algorithm would run here;
                    // for demo purposes we generate random detections
                    let detection_seed = rand::random::<f32>();
                    if detection_seed < 0.3 { // 30% chance of detecting a meteor
                        detection_count += 1;
                        last_detection_time = now;
                        info!("Detected meteor #{}", detection_count);
                        // Send the detection event to the main thread
                        let event = DetectionEvent {
                            timestamp: now,
                            confidence: 0.7 + (detection_seed * 0.3), // Confidence in [0.7, 0.79) since the seed is below 0.3
                            bounding_box: (
                                (rand::random::<f32>() * 800.0) as u32, // x
                                (rand::random::<f32>() * 600.0) as u32, // y
                                (50.0 + rand::random::<f32>() * 100.0) as u32, // width
                                (50.0 + rand::random::<f32>() * 100.0) as u32, // height
                            ),
                        };
                        if event_tx.send(event).await.is_err() {
                            warn!("Could not send detection event; the receiver may have been closed");
                            break;
                        }
                    }
                }
            } else {
                warn!("Could not get a frame; the buffer may be empty");
                time::sleep(Duration::from_millis(500)).await;
            }
            // For demo purposes, exit after 5 detections
            if detection_count >= 5 {
                info!("Reached the configured number of detections, detector exiting");
                break;
            }
        }
    });
}
/// Start the health monitor task
async fn start_health_monitor_task(health_tx: Sender<String>) {
    tokio::spawn(async move {
        info!("Starting health monitor task (simulated)");
        let mut interval = time::interval(Duration::from_secs(45));
        // A simple simulated health monitor that does not access the camera directly;
        // in a real application this might monitor the state of every subsystem
        loop {
            interval.tick().await;
            // Send a simple health status message back to the main thread
            // (this is a simplified demonstration)
            if health_tx.send("System running normally".to_string()).await.is_err() {
                warn!("Could not send health status; the receiver may have been closed");
                break;
            }
        }
    });
}

/// Create the configuration for the detection system
fn create_detector_config() -> Config {
    let camera_settings = CameraSettings {
        device: "0".to_string(),
        // A higher resolution is recommended for meteor detection
        resolution: Resolution::HD1080p,
        fps: 30,
        // Manual exposure is usually better for night observation
        exposure: ExposureMode::Manual(500000), // 500 ms exposure
        // Increase gain for better low-light sensitivity
        gain: 200,
        focus_locked: true,
    };
    Config {
        camera: camera_settings,
        // Use defaults for the remaining configuration fields
        ..Default::default()
    }
}

/// A meteor detection event
struct DetectionEvent {
    /// Event timestamp
    timestamp: chrono::DateTime<chrono::Utc>,
    /// Detection confidence (0-1)
    confidence: f32,
    /// Bounding box (x, y, width, height)
    bounding_box: (u32, u32, u32, u32),
}

View File

@@ -33,7 +33,7 @@ pub struct CameraController {
impl CameraController {
/// Create a new camera controller with the given configuration
pub async fn new(config: &crate::Config) -> Result<Self> {
pub async fn new(config: &crate::config::Config) -> Result<Self> {
// Extract camera settings from config (placeholder for now)
let settings = config.clone().camera;
@@ -129,7 +129,11 @@ impl CameraController {
tokio::spawn(async move {
let frame_interval = Duration::from_secs_f64(1.0 / fps as f64);
let mut interval = time::interval(frame_interval);
// Error tracking for recovery
let mut consecutive_errors = 0;
const ERROR_THRESHOLD: u32 = 10; // After 10 errors, try to reinitialize
info!("Starting camera capture at {} fps", fps);
loop {
@@ -137,6 +141,9 @@ impl CameraController {
match stream.capture_frame() {
Ok(mat) => {
// Reset error counter on success
consecutive_errors = 0;
// Create a new frame with timestamp
let frame = Frame::new(mat, Utc::now(), frame_count);
frame_count += 1;
@@ -151,6 +158,34 @@
}
Err(e) => {
error!("Failed to capture frame: {}", e);
consecutive_errors += 1;
if consecutive_errors >= ERROR_THRESHOLD {
error!("Too many consecutive errors ({}), attempting to reinitialize camera stream", consecutive_errors);
// Try to recreate the stream - this is a simplified version
// In a real implementation, you might want to signal the main thread
// to fully reinitialize the camera
match OpenCVCamera::open("0") { // Note: simplified, should use original device
Ok(mut camera) => {
// Configure minimal settings
let _ = camera.set_format(Resolution::HD720p);
let _ = camera.set_fps(fps);
// Try to start streaming again
match camera.start_streaming() {
Ok(new_stream) => {
info!("Successfully reinitialized camera stream");
stream = new_stream;
consecutive_errors = 0;
},
Err(e) => error!("Failed to restart camera streaming: {}", e)
}
},
Err(e) => error!("Failed to reopen camera: {}", e)
}
}
// Small delay to avoid tight error loop
time::sleep(Duration::from_millis(100)).await;
}
@@ -268,8 +303,36 @@ impl CameraController {
// Create a video file name
let video_path = event_dir.join("event.mp4").to_string_lossy().to_string();
// TODO: Call FFmpeg to convert frames to video
// This would be done by spawning an external process
// Use tokio::process to call FFmpeg to convert frames to video
use tokio::process::Command;
info!("Converting frames to video using FFmpeg");
let ffmpeg_result = Command::new("ffmpeg")
.args(&[
"-y", // Overwrite output file if it exists
"-framerate", &self.settings.fps.to_string(),
"-i", &event_dir.join("frame_%04d.jpg").to_string_lossy(),
"-c:v", "libx264",
"-preset", "medium",
"-crf", "23",
&video_path
])
.output()
.await;
match ffmpeg_result {
Ok(output) => {
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
warn!("FFmpeg warning: {}", stderr);
} else {
info!("Successfully created video at: {}", video_path);
}
},
Err(e) => {
warn!("Failed to execute FFmpeg: {}. Video will not be created, only individual frames saved.", e);
}
}
// Create and return the event
let event = MeteorEvent {
@@ -284,6 +347,33 @@ impl CameraController {
Ok(event)
}
/// Check the health of the camera system
pub async fn check_health(&self) -> Result<bool> {
// If camera is not running, consider it healthy
if !self.is_running {
return Ok(true);
}
// Check if we've received frames recently
if let Some(frame) = self.frame_buffer.get(0) {
let now = Utc::now();
let elapsed = now.signed_duration_since(frame.timestamp);
// If no new frames in last 5 seconds, camera may be stuck
if elapsed.num_seconds() > 5 {
warn!("Camera health check: No new frames received in {} seconds", elapsed.num_seconds());
return Ok(false);
}
} else if self.is_running {
// If no frames at all but camera is running, something is wrong
warn!("Camera health check: Camera is running but no frames in buffer");
return Ok(false);
}
// All checks passed
Ok(true)
}
/// Get the current status of the camera
pub async fn get_status(&self) -> Result<serde_json::Value> {
let frame_buffer_stats = {
@@ -341,3 +431,45 @@ impl CameraController {
Ok(status)
}
}
impl Drop for CameraController {
fn drop(&mut self) {
info!("Cleaning up camera controller resources");
// Stop streaming if running
if self.is_running {
// Use block_in_place since drop can't be async
if let Err(e) = tokio::task::block_in_place(|| {
futures::executor::block_on(self.stop_capture())
}) {
error!("Error stopping camera during cleanup: {}", e);
}
}
// Release camera resources
self.camera = None;
self.stream = None;
info!("Camera controller resources released");
}
}
// Implement Clone for CameraController
impl Clone for CameraController {
    fn clone(&self) -> Self {
        // Create a new broadcast channel
        let (frame_tx, _) = tokio::sync::broadcast::channel(30);
        Self {
            settings: self.settings.clone(),
            camera: None, // Do not clone the camera instance, since it owns underlying resources
            stream: None, // Do not clone the stream either, for the same reason
            frame_buffer: self.frame_buffer.clone(),
            frame_count: self.frame_count,
            is_running: self.is_running,
            frame_tx, // Use the newly created broadcast channel
            events_dir: self.events_dir.clone(),
        }
    }
}

View File

@@ -35,10 +35,6 @@ impl Frame {
)?;
Ok(())
}
pub fn is_full() {
}
}
/// A circular buffer for storing video frames

View File

@@ -1,288 +0,0 @@
// use anyhow::{anyhow, Context, Result};
// use log::{debug, error, info, warn};
// use std::path::Path;
// use v4l::buffer::Type;
// use v4l::io::traits::CaptureStream;
// use v4l::prelude::*;
// use v4l::video::Capture;
// use v4l::{Format, FourCC};
// use opencv::{core, imgproc, prelude::*};
// use crate::utils::opencv_compat::convert_color;
// use crate::camera::{ExposureMode, Resolution};
// /// V4L2 camera driver for star-light cameras
// pub struct V4l2Camera {
// /// The open device handle
// device: Device,
// /// The current camera format
// format: Format,
// /// Whether the camera is currently streaming
// is_streaming: bool,
// }
// impl V4l2Camera {
// /// Open a camera device by path
// pub fn open<P: AsRef<Path>>(path: P) -> Result<Self> {
// let device = Device::with_path(path.as_ref())
// .context("Failed to open camera device")?;
// info!(
// "Opened camera: {} ({})",
// device.info().card,
// device.info().driver
// );
// // Get the current format
// let format = device
// .format()
// .context("Failed to get camera format")?;
// debug!("Initial camera format: {:?}", format);
// Ok(Self {
// device,
// format,
// is_streaming: false,
// })
// }
// /// Set the camera resolution and pixel format
// pub fn set_format(&mut self, resolution: Resolution) -> Result<()> {
// let (width, height) = resolution.dimensions();
// // Try to set format to MJPEG or YUYV first, then fall back to others
// let formats = [FourCC::new(b"MJPG"), FourCC::new(b"YUYV")];
// let mut success = false;
// let mut last_error = None;
// for &fourcc in &formats {
// let mut format = Format::new(width, height, fourcc);
// match self.device.set_format(&mut format) {
// Ok(_) => {
// self.format = format;
// success = true;
// break;
// }
// Err(e) => {
// last_error = Some(e);
// warn!("Failed to set format {:?}: {}", fourcc, last_error.as_ref().unwrap());
// }
// }
// }
// if !success {
// return Err(anyhow!(
// "Failed to set any supported format: {:?}",
// last_error.unwrap()
// ));
// }
// info!(
// "Set camera format: {}×{} {}",
// self.format.width, self.format.height,
// String::from_utf8_lossy(&self.format.fourcc.repr)
// );
// Ok(())
// }
// /// Set the camera frame rate
// pub fn set_fps(&mut self, fps: u32) -> Result<()> {
// if let Some(params) = self.device.params() {
// let mut params = params.context("Failed to get camera parameters")?;
// params.set_frames_per_second(fps, 1);
// self.device
// .set_params(&params)
// .context("Failed to set frame rate")?;
// info!("Set camera frame rate: {} fps", fps);
// } else {
// warn!("Camera does not support frame rate adjustment");
// }
// Ok(())
// }
// /// Set camera exposure mode and value
// pub fn set_exposure(&mut self, mode: ExposureMode) -> Result<()> {
// // First, set auto/manual mode
// let ctrl_id = v4l::control::id::EXPOSURE_AUTO;
// let auto_value = match mode {
// ExposureMode::Auto => 3, // V4L2_EXPOSURE_AUTO
// ExposureMode::Manual(_) => 1, // V4L2_EXPOSURE_MANUAL
// };
// self.device
// .set_control(ctrl_id, auto_value)
// .context("Failed to set exposure mode")?;
// // If manual, set the exposure value
// if let ExposureMode::Manual(exposure_time) = mode {
// // Exposure time in microseconds
// let ctrl_id = v4l::control::id::EXPOSURE_ABSOLUTE;
// self.device
// .set_control(ctrl_id, exposure_time as i64)
// .context("Failed to set exposure time")?;
// }
// info!("Set camera exposure: {:?}", mode);
// Ok(())
// }
// /// Set camera gain (ISO)
// pub fn set_gain(&mut self, gain: u8) -> Result<()> {
// let ctrl_id = v4l::control::id::GAIN;
// self.device
// .set_control(ctrl_id, gain as i64)
// .context("Failed to set gain")?;
// info!("Set camera gain: {}", gain);
// Ok(())
// }
// /// Lock focus at infinity (if supported)
// pub fn lock_focus_at_infinity(&mut self) -> Result<()> {
// // First, set focus mode to manual
// let auto_focus_id = v4l::control::id::FOCUS_AUTO;
// if let Ok(_) = self.device.set_control(auto_focus_id, 0) {
// // Then set focus to infinity (typically maximum value)
// let focus_id = v4l::control::id::FOCUS_ABSOLUTE;
// // Get the range of the control
// if let Ok(control) = self.device.control(focus_id) {
// let max_focus = control.maximum();
// if let Ok(_) = self.device.set_control(focus_id, max_focus) {
// info!("Locked focus at infinity (value: {})", max_focus);
// return Ok(());
// }
// }
// warn!("Failed to set focus to infinity");
// } else {
// warn!("Camera does not support focus control");
// }
// Ok(())
// }
// /// Start streaming from the camera
// pub fn start_streaming(&mut self) -> Result<V4l2CaptureStream> {
// let queue = MmapStream::with_buffers(&self.device, Type::VideoCapture, 4)
// .context("Failed to create capture stream")?;
// self.is_streaming = true;
// info!("Started camera streaming");
// Ok(V4l2CaptureStream {
// stream: queue,
// format: self.format.clone(),
// })
// }
// /// Stop streaming from the camera
// pub fn stop_streaming(&mut self) -> Result<()> {
// // The streaming will be stopped when the CaptureStream is dropped
// self.is_streaming = false;
// info!("Stopped camera streaming");
// Ok(())
// }
// /// Check if the camera is currently streaming
// pub fn is_streaming(&self) -> bool {
// self.is_streaming
// }
// /// Get current format width
// pub fn width(&self) -> u32 {
// self.format.width
// }
// /// Get current format height
// pub fn height(&self) -> u32 {
// self.format.height
// }
// /// Get current format pixel format
// pub fn pixel_format(&self) -> FourCC {
// self.format.fourcc
// }
// }
// /// Wrapper around V4L2 capture stream
// pub struct V4l2CaptureStream {
// stream: MmapStream,
// format: Format,
// }
// impl V4l2CaptureStream {
// /// Capture a single frame from the camera
// pub fn capture_frame(&mut self) -> Result<core::Mat> {
// let buffer = self.stream.next()
// .context("Failed to capture frame")?;
// let width = self.format.width as i32;
// let height = self.format.height as i32;
// // Convert the buffer to an OpenCV Mat based on the pixel format
// let mat = match self.format.fourcc {
// // MJPEG format
// f if f == FourCC::new(b"MJPG") => {
// // Decode JPEG data
// let data = buffer.data();
// let vec_data = unsafe {
// std::slice::from_raw_parts(data.as_ptr(), data.len())
// }.to_vec();
// let buf = core::Vector::from_slice(&vec_data);
// let img = opencv::imgcodecs::imdecode(&buf, opencv::imgcodecs::IMREAD_COLOR)?;
// img
// },
// // YUYV format
// f if f == FourCC::new(b"YUYV") => {
// let data = buffer.data();
// // Create a Mat from the YUYV data
// let mut yuyv = unsafe {
// let bytes_per_pixel = 2; // YUYV is 2 bytes per pixel
// let step = width as usize * bytes_per_pixel;
// core::Mat::new_rows_cols_with_data(
// height,
// width,
// core::CV_8UC2,
// data.as_ptr() as *mut _,
// step,
// )?
// };
// // Convert YUYV to BGR
// let mut bgr = core::Mat::default()?;
// convert_color(&yuyv, &mut bgr, imgproc::COLOR_YUV2BGR_YUYV, 0)?;
// bgr
// },
// // Unsupported format
// _ => {
// return Err(anyhow!(
// "Unsupported pixel format: {}",
// String::from_utf8_lossy(&self.format.fourcc.repr)
// ));
// }
// };
// Ok(mat)
// }
// }
// impl Drop for V4l2CaptureStream {
// fn drop(&mut self) {
// debug!("V4L2 capture stream dropped");
// }
// }

View File

@@ -77,7 +77,7 @@ pub struct GpsController {
impl GpsController {
/// Create a new GPS controller with the given configuration
pub async fn new(config: &crate::Config) -> Result<Self> {
pub async fn new(config: &crate::config::Config) -> Result<Self> {
// Extract GPS settings from config
let gps_config = config.gps.clone();

src/lib.rs Normal file
View File

@@ -0,0 +1,20 @@
// Platform detection
#[cfg(target_os = "linux")]
pub const PLATFORM_SUPPORTS_GPIO: bool = true;
#[cfg(not(target_os = "linux"))]
pub const PLATFORM_SUPPORTS_GPIO: bool = false;
// Re-export all modules so they are visible to the examples
pub mod camera;
pub mod config;
pub mod utils;
pub mod gps;
pub mod sensors;
pub mod detection;
pub mod streaming;
pub mod storage;
pub mod communication;
pub mod monitoring;
pub mod overlay;
pub mod hooks;

View File

@@ -1,9 +1,3 @@
// Platform detection
#[cfg(target_os = "linux")]
pub const PLATFORM_SUPPORTS_GPIO: bool = true;
#[cfg(not(target_os = "linux"))]
pub const PLATFORM_SUPPORTS_GPIO: bool = false;
mod camera;
mod config;
@@ -28,6 +22,7 @@ use tokio::signal;
use futures::future::join_all;
pub use config::Config;
use meteor_detect::PLATFORM_SUPPORTS_GPIO;
use crate::overlay::Watermark;
/// Main entry point for the meteor detection system

View File

@@ -84,7 +84,7 @@ pub struct SensorController {
impl SensorController {
/// Create a new sensor controller with the given configuration
pub async fn new(config: &crate::Config) -> Result<Self> {
pub async fn new(config: &crate::config::Config) -> Result<Self> {
// Extract sensor settings from config
let sensor_config = config.sensors.clone();