# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview

Distributed meteor monitoring network with a microservices architecture. Four main services communicate via REST APIs and SQS queues.

### Core Services

| Service | Technology | Entry Point | Port |
|---|---|---|---|
| meteor-frontend | Next.js 15, React 19, TailwindCSS 4 | `src/app/layout.tsx` | 3000 |
| meteor-web-backend | NestJS, TypeORM, PostgreSQL | `src/main.ts` | 3001 |
| meteor-compute-service | Go 1.24, AWS SDK | `cmd/meteor-compute-service/main.go` | - |
| meteor-edge-client | Rust, Tokio, OpenCV | `src/main.rs` → `src/app.rs` | - |
## Development Commands

```bash
# Full stack development
npm run dev               # Start backend (3001) + frontend (3000) concurrently
npm run dev:frontend      # Next.js only
npm run dev:backend       # NestJS only

# Building
npm run build             # Build all services
npm run build:frontend
npm run build:backend

# Testing
npm run test              # Unit tests (frontend + backend)
npm run test:frontend     # Jest + Testing Library
npm run test:backend      # Jest for NestJS
npm run test:integration  # Backend integration with real DB
npm run test:e2e          # Playwright E2E tests
npm run test:fullstack    # Complete test suite

# Single test execution
cd meteor-web-backend && npm test -- --testPathPattern=events.service
cd meteor-frontend && npm test -- --testPathPattern=gallery
cd meteor-frontend && npx playwright test --grep="Gallery page"

# Test environment
./test-setup.sh           # Start LocalStack + test PostgreSQL (port 5433)
npm run clean:test        # Tear down test containers

# Linting
npm run lint              # Lint all
npm run lint:frontend     # ESLint for Next.js
npm run lint:backend      # ESLint + Prettier for NestJS

# Database migrations (from meteor-web-backend/)
npm run migrate:up
npm run migrate:down
npm run migrate:create <name>
```
### Edge Client (Rust)

```bash
cd meteor-edge-client
cargo build --release
cargo run -- run --camera sim:pattern:meteor   # Simulated camera
cargo run -- run --camera file:video.mp4       # File replay
./build.sh                                     # Cross-compile for Raspberry Pi ARM64

# Vida Meteor Detection Testing
./target/debug/meteor-edge-client test-vida <video.mp4>        # Test Vida detection on video
./target/debug/meteor-edge-client run --camera file:video.mp4  # Run with video file

# Advanced Memory Management Testing
./target/debug/meteor-edge-client test                       # Core frame pool tests
./target/debug/meteor-edge-client test-adaptive              # Adaptive pool management
./target/debug/meteor-edge-client test-integration           # Complete integration tests
./target/debug/meteor-edge-client test-ring-buffer           # Ring buffer & memory mapping
./target/debug/meteor-edge-client test-hierarchical-cache    # Hierarchical cache system

# Production Monitoring & Optimization
./target/debug/meteor-edge-client monitor                    # Production monitoring system

# Phase 5: End-to-End Integration & Deployment
./target/debug/meteor-edge-client test-integrated-system     # Integrated memory system
./target/debug/meteor-edge-client test-camera-integration    # Camera memory integration
./target/debug/meteor-edge-client test-meteor-detection      # Real-time meteor detection
./demo_integration_test.sh                                   # Integration test
```
### Compute Service (Go)

```bash
cd meteor-compute-service
go run cmd/meteor-compute-service/main.go
go test ./...
```

## Data Flow
- Edge Client captures frames via camera, detects meteors, uploads events to backend
- Backend API receives uploads, stores in S3, pushes to SQS for processing
- Compute Service consumes SQS, validates events, writes analysis results to PostgreSQL
- Frontend displays gallery, dashboard, and analysis views via React Query
## Database Schema

Uses node-pg-migrate. Entities in `meteor-web-backend/src/entities/`:
- Users: `UserProfile` + `UserIdentity` (multiple auth methods)
- Devices: `Device` + `InventoryDevice` + `DeviceRegistration` + `DeviceCertificate`
- Events: `RawEvent` → `ValidatedEvent` → `AnalysisResult`
- Subscriptions: `SubscriptionPlan` + `UserSubscription` + `PaymentRecord`
- Weather: `WeatherStation` + `WeatherObservation` + `WeatherForecast`

## API Structure

- Base path: `/api/v1/`
- JWT Bearer token authentication
- Key modules: `AuthModule`, `DevicesModule`, `EventsModule`, `PaymentsModule`, `SubscriptionModule`
## Frontend Architecture

- App Router in `src/app/` (dashboard, gallery, analysis, devices, settings)
- Components in `src/components/`, organized by domain (charts, device-registration, ui)
- API services in `src/services/`
- Auth context in `src/contexts/auth-context.tsx`
- React Query for server state, React Hook Form + Zod for forms

## Backend Architecture
- NestJS modules with domain-driven organization
- TypeORM entities with PostgreSQL
- Pino structured logging with correlation IDs
- Prometheus metrics via MetricsModule
- Stripe integration in PaymentsModule
## Coding Standards

- TypeScript: ESLint + Prettier, PascalCase for classes, camelCase for functions
- Rust: `cargo fmt`, snake_case modules, prefer `?` over `unwrap()`
- Go: `go fmt` + `go vet`, standard project layout

## Infrastructure

- Terraform configs in `infrastructure/`
- AWS: S3 (media), SQS (event queue), RDS (PostgreSQL), CloudWatch (logs/metrics)
- LocalStack for local development
- Docker Compose for test environment

## Key Type Considerations

- The User type uses `userId`, not `id` (defined in `src/contexts/auth-context.tsx`)
- `DeviceStatus` is an enum: pass enum values, not strings, to `StatusIndicator`
- Recharts has React 19 compatibility issues; type declarations live in `src/types/recharts.d.ts`
## Additional Testing Commands

```bash
# Single E2E test
cd meteor-frontend && npx playwright test --grep="Gallery page"

# Integration test for a specific feature
cd meteor-web-backend && npm run test:integration -- --testPathPattern=events

# Rust edge client memory management tests
cd meteor-edge-client && cargo test
cd meteor-edge-client && ./target/debug/meteor-edge-client test
cd meteor-edge-client && ./target/debug/meteor-edge-client test-adaptive
cd meteor-edge-client && ./target/debug/meteor-edge-client test-integration
cd meteor-edge-client && ./target/debug/meteor-edge-client test-ring-buffer
```

## Production Deployment

### Docker Support
- Dockerfiles for all services
- Next.js standalone output mode
- Multi-stage builds for optimization
### Infrastructure

- Terraform configurations in `infrastructure/`
- AWS services: RDS, S3, SQS, CloudWatch
- Environment-based configuration

### Observability
- Structured JSON logging throughout stack
- Metrics collection with Prometheus
- Health check endpoints
- Correlation IDs for request tracking
## Advanced Memory Management (Edge Client)
The meteor edge client features a sophisticated 4-phase memory optimization system designed for high-performance astronomical data processing on resource-constrained devices.
### Phase 1: Zero-Copy Architecture
- Arc-based frame sharing eliminates unnecessary memory copies
- RAII pattern ensures automatic resource cleanup
- Event-driven processing with efficient memory propagation
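A minimal sketch of the zero-copy pattern described above, using a hypothetical `Frame` type rather than the client's actual structs: cloning an `Arc` copies only a pointer and bumps a reference count, so every pipeline stage shares the same pixel buffer, and RAII frees it when the last owner drops.

```rust
use std::sync::Arc;

// Hypothetical captured frame: the pixel buffer is heap-allocated once.
struct Frame {
    id: u64,
    pixels: Vec<u8>,
}

// Fanning out to two pipeline stages clones the Arc, never the pixel data.
fn fan_out(frame: Arc<Frame>) -> (Arc<Frame>, Arc<Frame>) {
    (Arc::clone(&frame), Arc::clone(&frame))
}

fn main() {
    let frame = Arc::new(Frame { id: 1, pixels: vec![0u8; 1024] });
    let (a, b) = fan_out(Arc::clone(&frame));
    // Three owners share one buffer; it is freed when the last Arc drops (RAII).
    assert_eq!(Arc::strong_count(&frame), 3);
    assert_eq!(a.id, b.id);
    assert_eq!(a.pixels.len(), 1024);
}
```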
### Phase 2: Hierarchical Frame Pools
- Multiple pool sizes: 64KB, 256KB, 900KB, 2MB buffers
- Adaptive capacity management based on memory pressure
- Historical metrics tracking for intelligent resizing
- Cross-platform memory pressure detection
Key Features:
- Automatic pool resizing based on system memory usage (70%/80%/90% thresholds)
- Zero-allocation buffer acquisition with automatic return
- Comprehensive statistics tracking and monitoring
- Memory leak detection and prevention
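The size-class idea behind the pools can be sketched as follows. This is an illustrative single-threaded `FramePool`, not the client's implementation: requests round up to the smallest documented class (64 KiB, 256 KiB, 900 KiB, 2 MiB), and released buffers are kept for reuse so steady-state acquisition allocates nothing.

```rust
use std::collections::HashMap;

// Size classes mirroring the documented pools.
const CLASSES: [usize; 4] = [64 * 1024, 256 * 1024, 900 * 1024, 2 * 1024 * 1024];

struct FramePool {
    free: HashMap<usize, Vec<Vec<u8>>>, // size class -> reusable buffers
}

impl FramePool {
    fn new() -> Self {
        Self { free: CLASSES.iter().map(|&c| (c, Vec::new())).collect() }
    }

    // Round the request up to the smallest class that fits.
    fn class_for(len: usize) -> Option<usize> {
        CLASSES.iter().copied().find(|&c| c >= len)
    }

    // Reuse a freed buffer when one is available; allocate only on a miss.
    fn acquire(&mut self, len: usize) -> Vec<u8> {
        let class = Self::class_for(len).expect("request larger than biggest class");
        self.free
            .get_mut(&class)
            .unwrap()
            .pop()
            .unwrap_or_else(|| vec![0u8; class])
    }

    fn release(&mut self, buf: Vec<u8>) {
        self.free.entry(buf.len()).or_default().push(buf);
    }
}

fn main() {
    let mut pool = FramePool::new();
    let buf = pool.acquire(100_000); // rounds up to the 256 KiB class
    assert_eq!(buf.len(), 256 * 1024);
    pool.release(buf);
    let reused = pool.acquire(200_000); // same class: the returned buffer is reused
    assert_eq!(reused.len(), 256 * 1024);
}
```

The real pools add what a sketch cannot show: adaptive capacity under memory pressure and automatic return via RAII guards rather than an explicit `release` call.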
### Phase 3: Advanced Streaming & Caching

#### Week 1: Lock-Free Ring Buffers & Memory Mapping
- Lock-free ring buffers using atomic operations for concurrent access
- Memory-mapped I/O for large astronomical datasets
- Cross-platform implementation (Unix libc, Windows winapi)
- Performance benchmarks: >3M frames/sec throughput
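The core of a lock-free ring can be shown in a few lines. This is an illustrative single-producer/single-consumer variant (payload reduced to a `u64` frame id), not the client's buffer: head and tail only ever advance via atomic loads and stores with acquire/release pairing, so neither side takes a lock.

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};

struct Ring {
    slots: Vec<AtomicU64>, // payload: e.g. a frame id
    head: AtomicUsize,     // total pushes (writer-owned)
    tail: AtomicUsize,     // total pops (reader-owned)
}

impl Ring {
    fn new(cap: usize) -> Self {
        Self {
            slots: (0..cap).map(|_| AtomicU64::new(0)).collect(),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    fn push(&self, v: u64) -> bool {
        let head = self.head.load(Ordering::Relaxed);
        // Acquire pairs with the reader's Release store on `tail`.
        if head - self.tail.load(Ordering::Acquire) == self.slots.len() {
            return false; // full
        }
        self.slots[head % self.slots.len()].store(v, Ordering::Relaxed);
        self.head.store(head + 1, Ordering::Release); // publish the slot
        true
    }

    fn pop(&self) -> Option<u64> {
        let tail = self.tail.load(Ordering::Relaxed);
        // Acquire pairs with the writer's Release store on `head`.
        if tail == self.head.load(Ordering::Acquire) {
            return None; // empty
        }
        let v = self.slots[tail % self.slots.len()].load(Ordering::Relaxed);
        self.tail.store(tail + 1, Ordering::Release); // free the slot
        Some(v)
    }
}

fn main() {
    let ring = Ring::new(4);
    for i in 1..=4 {
        assert!(ring.push(i));
    }
    assert!(!ring.push(5)); // full: writer must retry
    assert_eq!(ring.pop(), Some(1)); // FIFO order preserved
    assert!(ring.push(5));
    assert_eq!(ring.pop(), Some(2));
}
```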
#### Week 2: Hierarchical Cache System
- Multi-level cache architecture (L1/L2/L3) with different eviction policies
- Astronomical data optimization with metadata support
- Intelligent prefetching based on access patterns
- Memory pressure adaptation with configurable limits
Cache Performance:
- L1: Hot data, LRU eviction, fastest access
- L2: Warm data, LFU eviction with frequency tracking
- L3: Cold data, time-based eviction for historical access
- Cache hit rates: >80% for typical astronomical workloads
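The L1 policy above can be sketched as a small LRU map. This is a minimal illustration (hypothetical `LruCache` type) of the eviction behavior only; in the real system an L1 miss or eviction would consult or demote to L2/L3 rather than discard.

```rust
use std::collections::{HashMap, VecDeque};

// Minimal L1 layer: LRU eviction, as described above.
struct LruCache {
    cap: usize,
    map: HashMap<u64, Vec<u8>>, // frame id -> cached block
    order: VecDeque<u64>,       // front = least recently used
}

impl LruCache {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get(&mut self, key: u64) -> Option<&Vec<u8>> {
        if self.map.contains_key(&key) {
            self.order.retain(|&k| k != key);
            self.order.push_back(key); // refresh recency on every hit
        }
        self.map.get(&key)
    }

    fn put(&mut self, key: u64, val: Vec<u8>) {
        if self.map.len() == self.cap && !self.map.contains_key(&key) {
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru); // evict coldest entry (real L1 demotes to L2)
            }
        }
        self.order.retain(|&k| k != key);
        self.order.push_back(key);
        self.map.insert(key, val);
    }
}

fn main() {
    let mut l1 = LruCache::new(2);
    l1.put(1, vec![0]);
    l1.put(2, vec![0]);
    l1.get(1);          // touch 1, so 2 becomes least recently used
    l1.put(3, vec![0]); // evicts 2
    assert!(l1.get(2).is_none());
    assert!(l1.get(1).is_some());
}
```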
### Phase 4: Production Optimization & Monitoring

#### Real-Time Monitoring System
- Health check monitoring with component-level status tracking
- Performance profiling with latency histograms and percentiles
- Alert management with configurable thresholds and suppression
- Comprehensive diagnostics including system resource tracking
Key Metrics Tracked:
- Memory usage and efficiency ratios
- Cache hit rates across all levels
- Frame processing latency (P50, P95, P99)
- System resource utilization
- Error rates and alert conditions
Production Features:
- Real-time health status reporting
- Configurable alert thresholds
- Performance profiling with microsecond precision
- System diagnostics with resource tracking
- Automated metric aggregation and retention
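The P50/P95/P99 latency figures the monitor reports reduce to percentile extraction over recorded samples. A minimal nearest-rank sketch over sorted latencies in microseconds (the real profiler uses histograms to avoid sorting):

```rust
// Nearest-rank percentile over a sorted sample of latencies (µs).
fn percentile(sorted_us: &[u64], p: f64) -> u64 {
    assert!(!sorted_us.is_empty());
    let rank = ((p / 100.0) * sorted_us.len() as f64).ceil() as usize;
    sorted_us[rank.saturating_sub(1).min(sorted_us.len() - 1)]
}

fn main() {
    // 100 synthetic frame latencies: 1..=100 µs.
    let mut lat: Vec<u64> = (1..=100).collect();
    lat.sort_unstable();
    assert_eq!(percentile(&lat, 50.0), 50); // P50
    assert_eq!(percentile(&lat, 95.0), 95); // P95
    assert_eq!(percentile(&lat, 99.0), 99); // P99
}
```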
### Memory Management Testing Commands

```bash
cd meteor-edge-client

# Phase 2 Testing
./target/release/meteor-edge-client test                     # Core frame pools
./target/release/meteor-edge-client test-adaptive            # Adaptive management
./target/release/meteor-edge-client test-integration         # Integration tests

# Phase 3 Testing
./target/release/meteor-edge-client test-ring-buffer         # Ring buffers & memory mapping
./target/release/meteor-edge-client test-hierarchical-cache  # Cache system

# Phase 4 Production Monitoring
./target/release/meteor-edge-client monitor                  # Live monitoring system

# Phase 5 End-to-End Integration
./target/release/meteor-edge-client test-integrated-system   # Integrated memory system
./target/release/meteor-edge-client test-camera-integration  # Camera memory integration
./target/release/meteor-edge-client test-meteor-detection    # Real-time meteor detection
```

### Phase 5: End-to-End Integration & Deployment
The final phase integrates all memory management components into a cohesive system for real-time meteor detection with camera integration.
#### Integrated Memory System
- Unified Architecture: All memory components work together seamlessly
- Multi-Configuration Support: Raspberry Pi and high-performance server configurations
- Auto-Optimization: Dynamic performance tuning based on system conditions
- Health Monitoring: Comprehensive system health reporting with recommendations
Key Components:
- Hierarchical frame pools with adaptive management
- Ring buffer streaming for astronomical frames
- Multi-level caching with prefetching
- Production monitoring with alerts
- Camera integration with memory-optimized capture
#### Camera Memory Integration
- Memory-Optimized Capture: Integration with hierarchical frame pools
- Real-Time Processing: Zero-copy frame processing pipeline
- Buffer Management: Adaptive capture buffer pools with memory pressure handling
- Performance Monitoring: Camera-specific metrics and health reporting
Camera Features:
- Multiple configuration support (Pi camera, performance camera)
- Capture buffer pool with automatic optimization
- Real-time statistics collection
- Memory pressure detection and response
- Health monitoring with diagnostic recommendations
#### Real-Time Meteor Detection Pipeline
- Vida Algorithm: Scientific meteor detection based on Vida et al. 2016 paper
- Dual Detection Paths: FireballDetector (K1=4) and MeteorDetector (K1=1.5)
- FTP Compression: 256 frames → 4 statistical images (maxpixel, avepixel, stdpixel, maxframe)
- Memory-Optimized Processing: Integrated with zero-copy architecture
- Real-Time Performance: ~100ms/block processing latency
#### Production-Ready Features
- Raspberry Pi Optimization: Conservative memory usage and CPU utilization
- Real-Time Constraints: Guaranteed processing latency limits
- Error Recovery: Robust error handling with automatic recovery
- Performance Metrics: Comprehensive detection and system metrics
- Memory Efficiency: Optimized for resource-constrained environments
### Performance Benchmarks
- Frame Pool Operations: >100K allocations/sec with zero memory leaks
- Ring Buffer Throughput: >3M frames/sec with concurrent access
- Cache Performance: >50K lookups/sec with 80%+ hit rates
- Memory Efficiency: <2x growth under sustained load
- Production Monitoring: Real-time metrics with <50μs overhead
This advanced memory management system enables the meteor edge client to:
- Process high-resolution astronomical frames with minimal memory overhead
- Adapt to varying system memory conditions automatically
- Provide production-grade observability and monitoring
- Maintain high performance on resource-constrained Raspberry Pi devices
- Support real-time meteor detection with sub-millisecond processing latency
## Vida Meteor Detection Algorithm (Edge Client)
The edge client implements the Vida detection algorithm based on "Open-source meteor detection software for low-cost single-board computers" (Vida et al., 2016). This is the same algorithm used by the Croatian Meteor Network (CMN) and RMS project.
### Architecture Overview

```
Input frame stream (video/camera)
        ↓
[FrameAccumulator] - 256-frame FTP compression
        ↓
[AccumulatedFrame] - maxpixel, avepixel, stdpixel, maxframe
        ├─→ [FireballDetector] - K1=4, 3D point-cloud analysis → fireball detection
        └─→ [MeteorDetector] - K1=1.5, Hough transform → meteor detection
        ↓
[VidaDetectionController] - coordination and callbacks
        ↓
[FtpDetectWriter] - FTPdetectinfo-format output
```
### Core Modules (src/detection/vida/)

| Module | Purpose | Lines of Code |
|---|---|---|
| `frame_accumulator.rs` | FTP compression engine | ~1200 |
| `accumulated_frame.rs` | FTP data structures | ~700 |
| `fireball_detector.rs` | Fireball detection (K1=4) | ~800 |
| `meteor_detector.rs` | Meteor detection (K1=1.5) | ~1000 |
| `line_detector.rs` | Hough + 3D line detection | ~800 |
| `morphology.rs` | Morphological preprocessing | ~950 |
| `star_extractor.rs` | Star extraction and sky quality | ~1000 |
| `calibration.rs` | Measurement calibration | ~1000 |
| `ftpdetect.rs` | FTPdetectinfo output | ~450 |
| `controller.rs` | Main controller | ~470 |
| `config.rs` | Configuration management | ~375 |
### Detection Parameters

FireballDetector (bright fireballs):
- `k1_threshold`: 4.0 (multiple of the standard deviation)
- `min_intensity`: 40 (minimum pixel intensity)
- Uses 3D point-cloud analysis

MeteorDetector (ordinary meteors):
- `k1_threshold`: 1.5 (multiple of the standard deviation; RMS-recommended value)
- `j1_offset`: 9.0 (absolute intensity offset)
- `max_white_ratio`: 0.07 (maximum white-pixel ratio)
- Uses Hough transform + temporal windows
### FTP Compression Format

256 frames are compressed into 4 statistical images:
- maxpixel: per-pixel maximum (meteor trails are visible here)
- avepixel: per-pixel mean, excluding the 4 largest values (sky background)
- stdpixel: per-pixel standard deviation (regions of change)
- maxframe: frame index at which the maximum occurred (timing information)
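The four images can be maintained online, folding each incoming frame into running statistics so no raw frames are stored. A simplified accumulator sketch (hypothetical `FtpAccumulator` type; the avepixel here is a plain running mean, whereas the real accumulator excludes the 4 brightest values per pixel):

```rust
// Online FTP compression sketch over a stream of 8-bit frames.
struct FtpAccumulator {
    count: u32,
    maxpixel: Vec<u8>,
    maxframe: Vec<u8>, // frame index of the max; fits u8 for 256-frame blocks
    sum: Vec<u32>,     // running sum, for avepixel
    sum_sq: Vec<u64>,  // running sum of squares, for stdpixel
}

impl FtpAccumulator {
    fn new(pixels: usize) -> Self {
        Self {
            count: 0,
            maxpixel: vec![0; pixels],
            maxframe: vec![0; pixels],
            sum: vec![0; pixels],
            sum_sq: vec![0; pixels],
        }
    }

    fn add_frame(&mut self, frame: &[u8]) {
        for (i, &p) in frame.iter().enumerate() {
            if p > self.maxpixel[i] {
                self.maxpixel[i] = p;
                self.maxframe[i] = self.count as u8; // remember when the max occurred
            }
            self.sum[i] += p as u32;
            self.sum_sq[i] += (p as u64) * (p as u64);
        }
        self.count += 1;
    }

    fn avepixel(&self, i: usize) -> f64 {
        self.sum[i] as f64 / self.count as f64
    }

    fn stdpixel(&self, i: usize) -> f64 {
        let mean = self.avepixel(i);
        (self.sum_sq[i] as f64 / self.count as f64 - mean * mean).sqrt()
    }
}

fn main() {
    let mut acc = FtpAccumulator::new(1);
    for v in [10u8, 200, 10, 10] {
        acc.add_frame(&[v]);
    }
    assert_eq!(acc.maxpixel[0], 200);
    assert_eq!(acc.maxframe[0], 1);    // the max arrived in frame index 1
    assert_eq!(acc.avepixel(0), 57.5); // (10 + 200 + 10 + 10) / 4
}
```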
### Detection Pipeline

1. Frame accumulation: collect 256 frames, computing statistics online
2. Thresholding: apply the K1×σ + J1 threshold
3. Morphological processing: clean → bridge → close → thin
4. Line detection: Hough transform (2D) or point-cloud analysis (3D)
5. Temporal propagation: validation across 7 overlapping windows
6. Centroid extraction: sub-pixel localization
7. Output: FTPdetectinfo format
### Testing Commands

```bash
cd meteor-edge-client

# Test Vida detection on a video file
./target/debug/meteor-edge-client test-vida video.mp4

# Run with a camera
./target/debug/meteor-edge-client run --camera device:0

# Run with a video file
./target/debug/meteor-edge-client run --camera file:video.mp4
```
### Performance

- Processing speed: ~100 ms per block (256 frames)
- False detections: 0-2 per block (after tuning)
- Memory efficiency: online statistics; no raw-frame storage required