Add warpgate MVP implementation with hardened supervisor
Full Rust implementation of the warpgate NAS cache proxy:

- CLI: clap-based with subcommands (run, setup, status, cache, warmup, bwlimit, speed-test, config-init, log)
- Config: TOML-based with env var override, preset templates
- rclone: VFS mount args builder, config generator, RC API client
- Services: Samba config gen, NFS exports, WebDAV serve args, systemd units
- Deploy: dependency checker, filesystem validation, one-click setup
- Supervisor: single-process tree managing rclone mount + smbd + WebDAV as child processes — no systemd dependency for protocol management

Supervisor hardening:

- ProtocolChildren Drop impl prevents orphaned processes on startup failure
- Early rclone exit detection in mount wait loop (fail fast, not 30s timeout)
- Graceful SIGTERM → 3s grace → SIGKILL (prevents orphaned smbd workers)
- RestartTracker with 5-min stability reset and linear backoff (2s/4s/6s)
- Shutdown signal checked during mount wait and before protocol start

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
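The RestartTracker behavior described above can be sketched as follows. This is an illustrative reconstruction from the commit message, not the repository's actual code — the struct name matches the message, but field and method names are assumptions:

```rust
use std::time::{Duration, Instant};

// Sketch of the restart policy from the commit message: linear backoff
// (2s, 4s, 6s, ...) for repeated crashes, with the restart counter reset
// once a child process has stayed up for the 5-minute stability window.
struct RestartTracker {
    restarts: u32,
    last_start: Option<Instant>,
}

impl RestartTracker {
    const STABILITY_WINDOW: Duration = Duration::from_secs(5 * 60);

    fn new() -> Self {
        Self { restarts: 0, last_start: None }
    }

    /// Called when a child exits; returns how long to wait before restarting.
    fn next_backoff(&mut self, now: Instant) -> Duration {
        // A process that ran longer than the stability window resets the count.
        if let Some(started) = self.last_start {
            if now.duration_since(started) >= Self::STABILITY_WINDOW {
                self.restarts = 0;
            }
        }
        self.restarts += 1;
        self.last_start = Some(now);
        Duration::from_secs(2 * self.restarts as u64) // 2s, 4s, 6s, ...
    }
}
```

Resetting on stability (rather than never) keeps a service that crashes once a day from eventually accumulating multi-minute restart delays.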
This commit is contained in:
parent 8f00f86eb4
commit 5d8bf52ae9
1 .gitignore vendored Normal file
@ -0,0 +1 @@
/warpgate/target/
80 CLAUDE.md Normal file
@ -0,0 +1,80 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview

**Warpgate** is an SSD read-write caching proxy that accelerates remote NAS access for photographers, videographers, and remote workers. It sits between client devices and a remote NAS, providing local SSD caching with automatic write-back.

- **Current state**: Specification/PRD only (`warpgate-prd-v4.md`, written in Chinese). No implementation code exists yet.
- **MVP target**: Configuration files + bash deployment scripts for Linux (Ubuntu 22.04+ / Debian 12+)
- **Primary user**: Photographers using Lightroom over Tailscale to access a home NAS
## Architecture

```
Clients (macOS/Windows/Linux/iPad)
        │ SMB / NFS / WebDAV
        ▼
Warpgate Proxy (local network)
  ├─ Samba Server (SMB2/3 — primary, for Lightroom/Finder/Explorer)
  ├─ NFS Server (Linux clients)
  ├─ WebDAV Server (mobile)
  └─ rclone VFS FUSE mount (/mnt/nas-photos)
      └─ SSD Cache (rclone-managed LRU, dirty file protection)
         │ SFTP over Tailscale/WireGuard
         ▼
Remote NAS (any brand supporting SFTP)
```
**Key design decisions:**

- All protocols share a single rclone FUSE mount point — one cache layer, no duplication
- rclone VFS with `--vfs-cache-mode full` handles both read-through caching and async write-back
- **Single-direction write constraint**: the NAS doesn't change while the user is away, eliminating conflict resolution
- Remote change detection uses rclone's `--dir-cache-time` (no brand-specific NAS APIs)
- Cache filesystem should be btrfs or ZFS (CoW + journaling for crash consistency)
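The shared-mount decision maps directly onto rclone's VFS flags. A minimal sketch of a mount-args builder in that spirit (the `nas:photos` remote name and the exact flag set are illustrative assumptions, not the shipped builder):

```rust
// Assemble the core rclone mount flags implied by the design decisions:
// a single FUSE mount with full VFS caching on the SSD cache dir, and a
// directory-cache TTL as the remote change-detection window.
fn vfs_mount_args(mount_point: &str, cache_dir: &str, dir_cache_time: &str) -> Vec<String> {
    [
        "mount", "nas:photos", mount_point,
        "--vfs-cache-mode", "full",         // read-through cache + async write-back
        "--cache-dir", cache_dir,           // SSD-backed cache location
        "--dir-cache-time", dir_cache_time, // remote change detection window
    ]
    .iter()
    .map(|s| s.to_string())
    .collect()
}
```

All protocol servers (Samba, NFS, WebDAV) then export the resulting mount point, so every client path goes through the same cache.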
## Core Technologies

- **rclone** v1.65+ — VFS FUSE mount, read-through cache, async write-back, LRU eviction, RC API
- **Samba** 4.x — SMB shares for macOS/Windows clients
- **nfs-kernel-server** — NFS exports for Linux clients
- **FUSE** 3.x — userspace filesystem for rclone mount
- **Tailscale/WireGuard** — secure tunnel to remote NAS via SFTP
## PRD Structure (warpgate-prd-v4.md)

The PRD is comprehensive (836 lines, Chinese) with these sections:

- Sections 1-2: Product positioning, target users
- Section 3: System architecture with Mermaid diagrams
- Section 4: Features organized by priority (P0=MVP, P1=important, P2=future)
- Section 5: Data consistency model and scenario walkthroughs
- Sections 6-7: Remote change detection, cache behavior details
- Section 8: Full configuration parameter reference
- Section 9: Preset templates (photographer, video editor, office)
- Sections 10-11: Deployment requirements, risks
- Sections 12-15: Roadmap, paid services, anti-scope, glossary
## Feature Priority

- **P0 (MVP)**: Transparent multi-protocol proxy, read-through cache, cache consistency, remote change detection, cache space management, one-click deployment
- **P1**: WiFi setup AP + captive portal proxy, cache warm-up, CLI tools (`warpgate status/cache-list/warmup/bwlimit/...`), adaptive bandwidth throttling, connection resilience
- **P2**: WiFi AP sharing, web UI, NAS-side agent push, multi-NAS support, smart cache policies, Docker image, notifications
## Configuration

All behavior is driven by environment variables — key groups:

- Connection: `NAS_HOST`, `NAS_USER`, `NAS_PASS`/`NAS_KEY_FILE`, `NAS_REMOTE_PATH`
- Cache: `CACHE_DIR`, `CACHE_MAX_SIZE`, `CACHE_MAX_AGE`, `CACHE_MIN_FREE`
- Read tuning: `READ_CHUNK_SIZE`, `READ_AHEAD`, `BUFFER_SIZE`
- Write-back: `VFS_WRITE_BACK` (default 5s), `TRANSFERS`
- Bandwidth: `BW_LIMIT_UP`, `BW_LIMIT_DOWN`, `BW_ADAPTIVE`
- Protocols: `ENABLE_SMB`, `ENABLE_NFS`, `ENABLE_WEBDAV`
- Directory cache: `DIR_CACHE_TIME`
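A loader honoring these variables falls back to a default when a variable is unset — a minimal sketch (the helper names and the particular defaults shown are illustrative assumptions, not the shipped implementation):

```rust
use std::env;

// Read an env var with a fallback default (variable names from the list
// above; these helpers are illustrative, not the actual warpgate code).
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

// Boolean toggles like ENABLE_SMB / BW_ADAPTIVE accept "1"/"true"/"yes".
fn env_flag(key: &str, default: bool) -> bool {
    match env::var(key) {
        Ok(v) => matches!(v.as_str(), "1" | "true" | "yes"),
        Err(_) => default,
    }
}
```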
## Language & Conventions

- PRD and product context are in **Simplified Chinese**
- Product name is **Warpgate** (English)
- NAS brand-agnostic: must work with Synology, QNAP, TrueNAS, DIY — SFTP only, no vendor APIs
- Deployment targets Linux only; clients are cross-platform
1126 warpgate/Cargo.lock generated Normal file
File diff suppressed because it is too large
15 warpgate/Cargo.toml Normal file
@ -0,0 +1,15 @@
[package]
name = "warpgate"
version = "0.1.0"
edition = "2024"

[dependencies]
anyhow = "1.0.101"
clap = { version = "4.5.59", features = ["derive"] }
serde = { version = "1.0.228", features = ["derive"] }
serde_json = "1.0.149"
thiserror = "2.0.18"
toml = "1.0.2"
ctrlc = "3.4"
libc = "0.2"
ureq = { version = "3.2.0", features = ["json"] }
59 warpgate/src/cli/bwlimit.rs Normal file
@ -0,0 +1,59 @@
//! `warpgate bwlimit` — view or adjust bandwidth limits at runtime.

use anyhow::{Context, Result};

use crate::config::Config;
use crate::rclone::rc;

pub fn run(_config: &Config, up: Option<&str>, down: Option<&str>) -> Result<()> {
    let result = rc::bwlimit(up, down).context("Failed to call rclone bwlimit API")?;

    if up.is_none() && down.is_none() {
        println!("Current bandwidth limits:");
    } else {
        println!("Updated bandwidth limits:");
    }

    // rclone core/bwlimit returns { "bytesPerSecond": N, "bytesPerSecondTx": N, "bytesPerSecondRx": N }
    // A value of -1 means unlimited.
    let has_fields = result.get("bytesPerSecondTx").is_some();

    if has_fields {
        if let Some(tx) = result.get("bytesPerSecondTx").and_then(|v| v.as_i64()) {
            if tx < 0 {
                println!("  Upload: unlimited");
            } else {
                println!("  Upload: {}/s", format_bytes(tx as u64));
            }
        }
        if let Some(rx) = result.get("bytesPerSecondRx").and_then(|v| v.as_i64()) {
            if rx < 0 {
                println!("  Download: unlimited");
            } else {
                println!("  Download: {}/s", format_bytes(rx as u64));
            }
        }
    } else {
        // Fallback: print raw response
        println!("{}", serde_json::to_string_pretty(&result)?);
    }

    Ok(())
}

fn format_bytes(bytes: u64) -> String {
    const KIB: f64 = 1024.0;
    const MIB: f64 = KIB * 1024.0;
    const GIB: f64 = MIB * 1024.0;

    let b = bytes as f64;
    if b >= GIB {
        format!("{:.1} GiB", b / GIB)
    } else if b >= MIB {
        format!("{:.1} MiB", b / MIB)
    } else if b >= KIB {
        format!("{:.1} KiB", b / KIB)
    } else {
        format!("{} B", bytes)
    }
}
90 warpgate/src/cli/cache.rs Normal file
@ -0,0 +1,90 @@
//! `warpgate cache-list` and `warpgate cache-clean` commands.

use anyhow::{Context, Result};

use crate::config::Config;
use crate::rclone::rc;

/// List cached files via rclone RC API.
pub fn list(_config: &Config) -> Result<()> {
    let result = rc::vfs_list("/").context("Failed to list VFS cache")?;

    // vfs/list may return an array directly or { "list": [...] }
    let entries = if let Some(arr) = result.as_array() {
        arr.as_slice()
    } else if let Some(list) = result.get("list").and_then(|v| v.as_array()) {
        list.as_slice()
    } else {
        // Unknown format — print raw JSON
        println!("{}", serde_json::to_string_pretty(&result)?);
        return Ok(());
    };

    if entries.is_empty() {
        println!("Cache is empty.");
        return Ok(());
    }

    println!("{:<10} PATH", "SIZE");
    println!("{}", "-".repeat(60));

    for entry in entries {
        let name = entry.get("Name").and_then(|v| v.as_str()).unwrap_or("?");
        let size = entry.get("Size").and_then(|v| v.as_u64()).unwrap_or(0);
        let is_dir = entry
            .get("IsDir")
            .and_then(|v| v.as_bool())
            .unwrap_or(false);

        if is_dir {
            println!("{:<10} {}/", "<dir>", name);
        } else {
            println!("{:<10} {}", format_bytes(size), name);
        }
    }

    Ok(())
}

/// Clean cached files (only clean files, never dirty).
pub fn clean(_config: &Config, all: bool) -> Result<()> {
    if all {
        println!("Clearing VFS directory cache...");
        rc::vfs_forget("/").context("Failed to clear VFS cache")?;
        println!("Done. VFS directory cache cleared.");
    } else {
        println!("Current cache status:");
        match rc::vfs_stats() {
            Ok(vfs) => {
                if let Some(dc) = vfs.disk_cache {
                    println!("  Used: {}", format_bytes(dc.bytes_used));
                    println!("  Uploading: {}", dc.uploads_in_progress);
                    println!("  Queued: {}", dc.uploads_queued);
                    if dc.uploads_in_progress > 0 || dc.uploads_queued > 0 {
                        println!("\n  Dirty files exist — only synced files are safe to clean.");
                    }
                }
            }
            Err(e) => eprintln!("  Could not fetch cache stats: {}", e),
        }
        println!("\nRun with --all to clear the directory cache.");
    }
    Ok(())
}

fn format_bytes(bytes: u64) -> String {
    const KIB: f64 = 1024.0;
    const MIB: f64 = KIB * 1024.0;
    const GIB: f64 = MIB * 1024.0;

    let b = bytes as f64;
    if b >= GIB {
        format!("{:.1} GiB", b / GIB)
    } else if b >= MIB {
        format!("{:.1} MiB", b / MIB)
    } else if b >= KIB {
        format!("{:.1} KiB", b / KIB)
    } else {
        format!("{} B", bytes)
    }
}
21 warpgate/src/cli/config_init.rs Normal file
@ -0,0 +1,21 @@
//! `warpgate config-init` — generate a default config file.

use std::path::PathBuf;

use anyhow::Result;

use crate::config::Config;

pub fn run(output: Option<PathBuf>) -> Result<()> {
    let path = output.unwrap_or_else(|| PathBuf::from(crate::config::DEFAULT_CONFIG_PATH));
    let content = Config::default_toml();
    if path.exists() {
        anyhow::bail!("Config file already exists: {}. Remove it first.", path.display());
    }
    if let Some(parent) = path.parent() {
        std::fs::create_dir_all(parent)?;
    }
    std::fs::write(&path, content)?;
    println!("Config written to {}", path.display());
    Ok(())
}
33 warpgate/src/cli/log.rs Normal file
@ -0,0 +1,33 @@
//! `warpgate log` — stream service logs in real time.

use std::process::Command;

use anyhow::{Context, Result};

use crate::config::Config;

pub fn run(_config: &Config, lines: u32, follow: bool) -> Result<()> {
    let mut cmd = Command::new("journalctl");
    cmd.arg("-u")
        .arg("warpgate-mount")
        .arg("-n")
        .arg(lines.to_string());

    if follow {
        // Stream directly to stdout with -f (like tail -f)
        cmd.arg("-f");
        let status = cmd.status().context("Failed to run journalctl")?;
        if !status.success() {
            anyhow::bail!("journalctl exited with status {}", status);
        }
    } else {
        let output = cmd.output().context("Failed to run journalctl")?;
        if !output.status.success() {
            let stderr = String::from_utf8_lossy(&output.stderr);
            anyhow::bail!("journalctl failed: {}", stderr.trim());
        }
        print!("{}", String::from_utf8_lossy(&output.stdout));
    }

    Ok(())
}
7 warpgate/src/cli/mod.rs Normal file
@ -0,0 +1,7 @@
pub mod bwlimit;
pub mod cache;
pub mod config_init;
pub mod log;
pub mod speed_test;
pub mod status;
pub mod warmup;
117 warpgate/src/cli/speed_test.rs Normal file
@ -0,0 +1,117 @@
//! `warpgate speed-test` — test network speed to remote NAS.

use std::io::Write;
use std::process::Command;
use std::time::Instant;

use anyhow::{Context, Result};

use crate::config::Config;
use crate::rclone::config as rclone_config;

const TEST_SIZE: usize = 10 * 1024 * 1024; // 10 MiB

pub fn run(config: &Config) -> Result<()> {
    let tmp_local = std::env::temp_dir().join("warpgate-speedtest");
    let remote_path = format!(
        "nas:{}/.warpgate-speedtest",
        config.connection.remote_path
    );

    // Create a 10 MiB test file
    println!("Creating 10 MiB test file...");
    {
        let mut f = std::fs::File::create(&tmp_local)
            .context("Failed to create temp test file")?;
        let buf = vec![0x42u8; TEST_SIZE];
        f.write_all(&buf)?;
        f.flush()?;
    }

    // Upload test
    println!("Testing upload speed...");
    let start = Instant::now();
    let status = Command::new("rclone")
        .arg("copyto")
        .arg("--config")
        .arg(rclone_config::RCLONE_CONF_PATH)
        .arg(&tmp_local)
        .arg(&remote_path)
        .status()
        .context("Failed to run rclone copyto for upload")?;

    let upload_elapsed = start.elapsed();
    if !status.success() {
        cleanup(&tmp_local, &remote_path);
        anyhow::bail!("Upload failed with exit code {}", status);
    }

    let upload_speed = TEST_SIZE as f64 / upload_elapsed.as_secs_f64();
    println!(
        "Upload: {}/s ({:.1}s)",
        format_bytes(upload_speed as u64),
        upload_elapsed.as_secs_f64()
    );

    // Remove local file before download test
    let _ = std::fs::remove_file(&tmp_local);

    // Download test
    println!("Testing download speed...");
    let start = Instant::now();
    let status = Command::new("rclone")
        .arg("copyto")
        .arg("--config")
        .arg(rclone_config::RCLONE_CONF_PATH)
        .arg(&remote_path)
        .arg(&tmp_local)
        .status()
        .context("Failed to run rclone copyto for download")?;

    let download_elapsed = start.elapsed();
    if !status.success() {
        cleanup(&tmp_local, &remote_path);
        anyhow::bail!("Download failed with exit code {}", status);
    }

    let download_speed = TEST_SIZE as f64 / download_elapsed.as_secs_f64();
    println!(
        "Download: {}/s ({:.1}s)",
        format_bytes(download_speed as u64),
        download_elapsed.as_secs_f64()
    );

    // Cleanup
    cleanup(&tmp_local, &remote_path);
    println!("Done.");

    Ok(())
}

/// Remove local and remote temp files.
fn cleanup(local: &std::path::Path, remote: &str) {
    let _ = std::fs::remove_file(local);
    let _ = Command::new("rclone")
        .arg("deletefile")
        .arg("--config")
        .arg(rclone_config::RCLONE_CONF_PATH)
        .arg(remote)
        .status();
}

fn format_bytes(bytes: u64) -> String {
    const KIB: f64 = 1024.0;
    const MIB: f64 = KIB * 1024.0;
    const GIB: f64 = MIB * 1024.0;

    let b = bytes as f64;
    if b >= GIB {
        format!("{:.1} GiB", b / GIB)
    } else if b >= MIB {
        format!("{:.1} MiB", b / MIB)
    } else if b >= KIB {
        format!("{:.1} KiB", b / KIB)
    } else {
        format!("{} B", bytes)
    }
}
72 warpgate/src/cli/status.rs Normal file
@ -0,0 +1,72 @@
//! `warpgate status` — show service status, cache stats, write-back queue, bandwidth.

use anyhow::Result;

use crate::config::Config;
use crate::rclone::{mount, rc};

pub fn run(config: &Config) -> Result<()> {
    // Check mount status
    let mounted = match mount::is_mounted(config) {
        Ok(m) => m,
        Err(e) => {
            eprintln!("Warning: could not check mount status: {}", e);
            false
        }
    };

    if mounted {
        println!("Mount: UP ({})", config.mount.point.display());
    } else {
        println!("Mount: DOWN");
        println!("\nrclone VFS mount is not active.");
        println!("Start with: systemctl start warpgate-mount");
        return Ok(());
    }

    // Transfer stats from rclone RC API
    match rc::core_stats() {
        Ok(stats) => {
            println!("Speed: {}/s", format_bytes(stats.speed as u64));
            println!("Moved: {}", format_bytes(stats.bytes));
            println!("Active: {} transfers", stats.transfers);
            println!("Errors: {}", stats.errors);
        }
        Err(e) => {
            eprintln!("Could not reach rclone RC API: {}", e);
        }
    }

    // VFS cache stats (RC connection error already reported above)
    if let Ok(vfs) = rc::vfs_stats() {
        if let Some(dc) = vfs.disk_cache {
            println!("Cache: {}", format_bytes(dc.bytes_used));
            println!(
                "Dirty: {} uploading, {} queued",
                dc.uploads_in_progress, dc.uploads_queued
            );
            if dc.errored_files > 0 {
                println!("Errored: {} files", dc.errored_files);
            }
        }
    }

    Ok(())
}

fn format_bytes(bytes: u64) -> String {
    const KIB: f64 = 1024.0;
    const MIB: f64 = KIB * 1024.0;
    const GIB: f64 = MIB * 1024.0;

    let b = bytes as f64;
    if b >= GIB {
        format!("{:.1} GiB", b / GIB)
    } else if b >= MIB {
        format!("{:.1} MiB", b / MIB)
    } else if b >= KIB {
        format!("{:.1} KiB", b / KIB)
    } else {
        format!("{} B", bytes)
    }
}
45 warpgate/src/cli/warmup.rs Normal file
@ -0,0 +1,45 @@
//! `warpgate warmup` — pre-cache a remote directory to local SSD.

use std::process::Command;

use anyhow::{Context, Result};

use crate::config::Config;
use crate::rclone::config as rclone_config;

pub fn run(config: &Config, path: &str, newer_than: Option<&str>) -> Result<()> {
    let remote_src = format!("nas:{}/{}", config.connection.remote_path, path);
    let local_dest = std::env::temp_dir().join("warpgate-warmup");

    println!("Warming up: {}", remote_src);

    // Create temp destination for downloaded files
    std::fs::create_dir_all(&local_dest)
        .context("Failed to create temp directory for warmup")?;

    let mut cmd = Command::new("rclone");
    cmd.arg("copy")
        .arg("--config")
        .arg(rclone_config::RCLONE_CONF_PATH)
        .arg(&remote_src)
        .arg(&local_dest)
        .arg("--no-traverse")
        .arg("--progress");

    if let Some(age) = newer_than {
        cmd.arg("--max-age").arg(age);
    }

    println!("Downloading from remote NAS...");
    let status = cmd.status().context("Failed to run rclone copy")?;

    // Clean up temp directory
    let _ = std::fs::remove_dir_all(&local_dest);

    if status.success() {
        println!("Warmup complete.");
        Ok(())
    } else {
        anyhow::bail!("rclone copy exited with status {}", status);
    }
}
213 warpgate/src/config.rs Normal file
@ -0,0 +1,213 @@
//! Warpgate configuration schema.
//!
//! Maps to `/etc/warpgate/config.toml`. All fields mirror the PRD Section 8 parameter list.
//! Environment variables can override config file values (prefixed with `WARPGATE_`).

use std::path::{Path, PathBuf};

use anyhow::{Context, Result};
use serde::{Deserialize, Serialize};

/// Default config file path.
pub const DEFAULT_CONFIG_PATH: &str = "/etc/warpgate/config.toml";

/// Top-level configuration.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
    pub connection: ConnectionConfig,
    pub cache: CacheConfig,
    pub read: ReadConfig,
    pub bandwidth: BandwidthConfig,
    pub writeback: WritebackConfig,
    pub directory_cache: DirectoryCacheConfig,
    pub protocols: ProtocolsConfig,
    pub mount: MountConfig,
}

/// SFTP connection to remote NAS.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ConnectionConfig {
    /// Remote NAS Tailscale IP or hostname.
    pub nas_host: String,
    /// SFTP username.
    pub nas_user: String,
    /// SFTP password (prefer key_file).
    #[serde(default)]
    pub nas_pass: Option<String>,
    /// Path to SSH private key.
    #[serde(default)]
    pub nas_key_file: Option<String>,
    /// Target path on NAS.
    pub remote_path: String,
    /// SFTP port.
    #[serde(default = "default_sftp_port")]
    pub sftp_port: u16,
    /// SFTP connection pool size.
    #[serde(default = "default_sftp_connections")]
    pub sftp_connections: u32,
}

/// SSD cache settings.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CacheConfig {
    /// Cache storage directory (should be on SSD, prefer btrfs/ZFS).
    pub dir: PathBuf,
    /// Max cache size (e.g. "200G").
    #[serde(default = "default_cache_max_size")]
    pub max_size: String,
    /// Max cache retention time (e.g. "720h").
    #[serde(default = "default_cache_max_age")]
    pub max_age: String,
    /// Minimum free space on cache disk (e.g. "10G").
    #[serde(default = "default_cache_min_free")]
    pub min_free: String,
}

/// Read optimization parameters.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReadConfig {
    /// Chunk size for chunked reads (e.g. "256M").
    #[serde(default = "default_read_chunk_size")]
    pub chunk_size: String,
    /// Max chunk growth limit (e.g. "1G").
    #[serde(default = "default_read_chunk_limit")]
    pub chunk_limit: String,
    /// Read-ahead buffer size (e.g. "512M").
    #[serde(default = "default_read_ahead")]
    pub read_ahead: String,
    /// In-memory buffer size (e.g. "256M").
    #[serde(default = "default_buffer_size")]
    pub buffer_size: String,
}

/// Bandwidth control.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BandwidthConfig {
    /// Upload (write-back) speed limit (e.g. "10M", "0" = unlimited).
    #[serde(default = "default_bw_zero")]
    pub limit_up: String,
    /// Download (cache pull) speed limit.
    #[serde(default = "default_bw_zero")]
    pub limit_down: String,
    /// Enable adaptive write-back throttling.
    #[serde(default = "default_true")]
    pub adaptive: bool,
}

/// Write-back behavior.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WritebackConfig {
    /// Delay before write-back (rclone --vfs-write-back).
    #[serde(default = "default_write_back")]
    pub write_back: String,
    /// Concurrent transfer count (rclone --transfers).
    #[serde(default = "default_transfers")]
    pub transfers: u32,
}

/// Directory listing cache.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DirectoryCacheConfig {
    /// Directory cache TTL (rclone --dir-cache-time).
    #[serde(default = "default_dir_cache_time")]
    pub cache_time: String,
}

/// Multi-protocol service toggles.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProtocolsConfig {
    /// Enable SMB (Samba) sharing.
    #[serde(default = "default_true")]
    pub enable_smb: bool,
    /// Enable NFS export.
    #[serde(default)]
    pub enable_nfs: bool,
    /// Enable WebDAV service.
    #[serde(default)]
    pub enable_webdav: bool,
    /// NFS allowed network CIDR.
    #[serde(default = "default_nfs_network")]
    pub nfs_allowed_network: String,
    /// WebDAV listen port.
    #[serde(default = "default_webdav_port")]
    pub webdav_port: u16,
}

/// FUSE mount point configuration.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MountConfig {
    /// FUSE mount point path.
    #[serde(default = "default_mount_point")]
    pub point: PathBuf,
}

// --- Default value functions ---

fn default_sftp_port() -> u16 {
    22
}
fn default_sftp_connections() -> u32 {
    8
}
fn default_cache_max_size() -> String {
    "200G".into()
}
fn default_cache_max_age() -> String {
    "720h".into()
}
fn default_cache_min_free() -> String {
    "10G".into()
}
fn default_read_chunk_size() -> String {
    "256M".into()
}
fn default_read_chunk_limit() -> String {
    "1G".into()
}
fn default_read_ahead() -> String {
    "512M".into()
}
fn default_buffer_size() -> String {
    "256M".into()
}
fn default_bw_zero() -> String {
    "0".into()
}
fn default_true() -> bool {
    true
}
fn default_write_back() -> String {
    "5s".into()
}
fn default_transfers() -> u32 {
    4
}
fn default_dir_cache_time() -> String {
    "1h".into()
}
fn default_nfs_network() -> String {
    "192.168.0.0/24".into()
}
fn default_webdav_port() -> u16 {
    8080
}
fn default_mount_point() -> PathBuf {
    PathBuf::from("/mnt/nas-photos")
}

impl Config {
    /// Load config from a TOML file.
    pub fn load(path: &Path) -> Result<Self> {
        let content = std::fs::read_to_string(path)
            .with_context(|| format!("Failed to read config file: {}", path.display()))?;
        let config: Config =
            toml::from_str(&content).with_context(|| "Failed to parse config TOML")?;
        Ok(config)
    }

    /// Generate a default config TOML string with comments.
    pub fn default_toml() -> String {
        include_str!("../templates/config.toml.default")
            .to_string()
    }
}
70 warpgate/src/deploy/deps.rs Normal file
@ -0,0 +1,70 @@
|
|||||||
|
//! Dependency detection and installation.

use std::process::Command;

use anyhow::{Context, Result};

/// Required system dependencies (binary names).
pub const REQUIRED_DEPS: &[&str] = &["rclone", "smbd", "fusermount3"];

/// Optional dependencies (only if enabled in config).
pub const OPTIONAL_DEPS: &[(&str, &str)] = &[
    ("exportfs", "nfs-kernel-server"), // NFS
];

/// Check which required dependencies are missing.
///
/// Runs `which <binary>` for each entry in [`REQUIRED_DEPS`] and returns the
/// names of binaries that could not be found.
pub fn check_missing() -> Vec<String> {
    REQUIRED_DEPS
        .iter()
        .filter(|bin| {
            !Command::new("which")
                .arg(bin)
                .stdout(std::process::Stdio::null())
                .stderr(std::process::Stdio::null())
                .status()
                .map(|s| s.success())
                .unwrap_or(false)
        })
        .map(|bin| (*bin).to_string())
        .collect()
}

/// Map a binary name to its corresponding apt package name.
fn binary_to_package(binary: &str) -> &str {
    match binary {
        "rclone" => "rclone",
        "smbd" => "samba",
        "fusermount3" => "fuse3",
        "exportfs" => "nfs-kernel-server",
        other => other,
    }
}

/// Install missing dependencies via apt.
///
/// Takes a list of missing **binary names** (as returned by [`check_missing`]),
/// maps each to the correct apt package, and runs `apt-get install -y`.
pub fn install_missing(binaries: &[String]) -> Result<()> {
    if binaries.is_empty() {
        return Ok(());
    }

    let packages: Vec<&str> = binaries.iter().map(|b| binary_to_package(b)).collect();

    println!("Installing packages: {}", packages.join(", "));

    let status = Command::new("apt-get")
        .args(["install", "-y"])
        .args(&packages)
        .status()
        .context("Failed to run apt-get install")?;

    if !status.success() {
        anyhow::bail!("apt-get install failed with exit code: {}", status);
    }

    Ok(())
}
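Not part of the diff — a minimal standalone sketch of what `check_missing` does per binary, resolving a name against `$PATH` directly instead of spawning `which` (the function name `find_in_path` is hypothetical, not part of the crate):

```rust
use std::env;

/// Resolve `binary` against $PATH without spawning `which`.
/// Hypothetical alternative to the subprocess-based check in deps.rs.
fn find_in_path(binary: &str) -> bool {
    env::var_os("PATH")
        .map(|paths| env::split_paths(&paths).any(|dir| dir.join(binary).is_file()))
        .unwrap_or(false)
}
```

This avoids one subprocess per dependency; the `which`-based version in the diff has the advantage of matching shell resolution exactly.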
66 warpgate/src/deploy/fs_check.rs Normal file
@@ -0,0 +1,66 @@
//! Filesystem detection for cache directory.

use std::path::Path;

use anyhow::{Context, Result};

/// Detect the filesystem type of the given path by reading `/proc/mounts`.
///
/// Parses each line of `/proc/mounts` (format: `device mount_point fs_type options dump pass`)
/// and finds the mount entry whose mount point is the longest prefix of `path`.
/// Returns the filesystem type string (e.g. "ext4", "btrfs", "zfs").
pub fn detect_fs_type(path: &Path) -> Result<String> {
    let canonical = path
        .canonicalize()
        .unwrap_or_else(|_| path.to_path_buf());
    let path_str = canonical.to_string_lossy();

    let mounts = std::fs::read_to_string("/proc/mounts")
        .context("Failed to read /proc/mounts")?;

    let mut best_mount = "";
    let mut best_fs = String::new();

    for line in mounts.lines() {
        let fields: Vec<&str> = line.split_whitespace().collect();
        if fields.len() < 3 {
            continue;
        }
        let mount_point = fields[1];
        let fs_type = fields[2];

        // Check if this mount point is a path-aware prefix of the target path
        // and is longer than the current best match. Plain `starts_with` would
        // wrongly match "/mnt/nas" against "/mnt/nas-photos", so require the
        // prefix to end at a path-component boundary.
        let is_prefix = mount_point == "/"
            || path_str == mount_point
            || path_str.starts_with(&format!("{mount_point}/"));
        if is_prefix && mount_point.len() > best_mount.len() {
            best_mount = mount_point;
            best_fs = fs_type.to_string();
        }
    }

    if best_fs.is_empty() {
        anyhow::bail!("Could not determine filesystem type for {}", path.display());
    }

    Ok(best_fs)
}

/// Warn if the cache directory is not on btrfs or ZFS.
///
/// Calls [`detect_fs_type`] and prints a warning if the filesystem is not
/// a copy-on-write filesystem (btrfs or zfs).
pub fn warn_if_not_cow(path: &Path) -> Result<()> {
    match detect_fs_type(path) {
        Ok(fs_type) => {
            if fs_type != "btrfs" && fs_type != "zfs" {
                println!(
                    "WARNING: Cache directory is on {fs_type}. \
                     btrfs or ZFS is recommended for crash consistency."
                );
            }
        }
        Err(e) => {
            println!("WARNING: Could not detect filesystem type: {e}");
        }
    }
    Ok(())
}
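A standalone sketch of the longest-prefix mount matching used by `detect_fs_type`, with the path-boundary rule made explicit. The helper `fs_type_for` and its `(mount_point, fs_type)` tuple input are illustrative only, not the crate's API:

```rust
/// Pick the filesystem type of the mount whose mount point is the longest
/// path-aware prefix of `path`. Mirrors the /proc/mounts scan in fs_check.rs.
fn fs_type_for<'a>(path: &str, mounts: &[(&'a str, &'a str)]) -> Option<&'a str> {
    let mut best: Option<(&str, &str)> = None;
    for &(mount_point, fs_type) in mounts {
        // A path-aware prefix: "/mnt/nas" must not match "/mnt/nas-photos".
        let is_prefix = mount_point == "/"
            || path == mount_point
            || path.starts_with(&format!("{mount_point}/"));
        if is_prefix && best.map_or(true, |(mp, _)| mount_point.len() > mp.len()) {
            best = Some((mount_point, fs_type));
        }
    }
    best.map(|(_, fs)| fs)
}
```

The longest match wins because nested mounts (e.g. a btrfs subvolume under an ext4 root) shadow their parents for paths beneath them.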
3 warpgate/src/deploy/mod.rs Normal file
@@ -0,0 +1,3 @@
pub mod deps;
pub mod fs_check;
pub mod setup;
57 warpgate/src/deploy/setup.rs Normal file
@@ -0,0 +1,57 @@
//! `warpgate deploy` — one-click deployment orchestration.

use anyhow::{Context, Result};

use crate::config::Config;
use crate::deploy::{deps, fs_check};
use crate::rclone;
use crate::services::{nfs, samba, systemd, webdav};

pub fn run(config: &Config) -> Result<()> {
    // Step 1: Check and install dependencies
    println!("Checking dependencies...");
    let missing = deps::check_missing();
    if missing.is_empty() {
        println!(" All dependencies satisfied.");
    } else {
        println!(" Missing: {}", missing.join(", "));
        deps::install_missing(&missing)?;
    }

    // Step 2: Check filesystem type
    println!("Checking filesystem...");
    fs_check::warn_if_not_cow(&config.cache.dir)?;

    // Step 3: Create cache directory
    println!("Creating cache directory...");
    std::fs::create_dir_all(&config.cache.dir)
        .with_context(|| format!("Failed to create cache dir: {}", config.cache.dir.display()))?;

    // Step 4: Generate rclone config
    println!("Generating rclone config...");
    rclone::config::write_config(config)?;

    // Step 5: Generate service configs based on protocol toggles
    println!("Generating service configs...");
    if config.protocols.enable_smb {
        samba::write_config(config)?;
    }
    if config.protocols.enable_nfs {
        nfs::write_config(config)?;
    }
    // WebDAV is served by rclone directly; validate that the config is well-formed.
    if config.protocols.enable_webdav {
        let _ = webdav::build_serve_command(config);
    }

    // Step 6: Install the single warpgate.service unit (supervisor mode)
    println!("Installing warpgate.service...");
    systemd::install_run_unit(config)?;

    // Step 7: Enable and start the unified service
    println!("Starting warpgate service...");
    systemd::enable_and_start_run()?;

    println!("Deployment complete!");
    Ok(())
}
125 warpgate/src/main.rs Normal file
@@ -0,0 +1,125 @@
mod cli;
mod config;
mod deploy;
mod rclone;
mod services;
mod supervisor;

use std::path::PathBuf;

use anyhow::Result;
use clap::{Parser, Subcommand};

use config::Config;

/// Warpgate — Make your NAS feel local.
///
/// SSD read-write caching proxy for remote NAS access.
#[derive(Parser)]
#[command(name = "warpgate", version, about)]
struct Cli {
    /// Path to config file.
    #[arg(short, long, default_value = config::DEFAULT_CONFIG_PATH)]
    config: PathBuf,

    #[command(subcommand)]
    command: Commands,
}

#[derive(Subcommand)]
enum Commands {
    /// One-click deploy: install deps, generate configs, enable services.
    Deploy,
    /// Show service status, cache stats, write-back queue, bandwidth.
    Status,
    /// List files currently in the SSD cache.
    CacheList,
    /// Clean cached files (only evicts clean files, never dirty).
    CacheClean {
        /// Remove all clean files (default: only expired).
        #[arg(long)]
        all: bool,
    },
    /// Pre-cache a remote directory to local SSD.
    Warmup {
        /// Remote path to warm up (relative to NAS remote_path).
        path: String,
        /// Only files newer than this duration (e.g. "7d", "24h").
        #[arg(long)]
        newer_than: Option<String>,
    },
    /// View or adjust bandwidth limits at runtime.
    Bwlimit {
        /// Upload limit (e.g. "10M", "0" for unlimited).
        #[arg(long)]
        up: Option<String>,
        /// Download limit (e.g. "50M", "0" for unlimited).
        #[arg(long)]
        down: Option<String>,
    },
    /// Stream service logs in real time.
    Log {
        /// Number of recent lines to show.
        #[arg(short, long, default_value = "50")]
        lines: u32,
        /// Follow log output (like tail -f).
        #[arg(short, long)]
        follow: bool,
    },
    /// Test network speed to remote NAS.
    SpeedTest,
    /// Run all services under a single supervisor process.
    Run,
    /// Generate a default config file.
    ConfigInit {
        /// Output path (default: /etc/warpgate/config.toml).
        #[arg(short, long)]
        output: Option<PathBuf>,
    },
}

fn main() -> Result<()> {
    let cli = Cli::parse();

    match cli.command {
        // config-init doesn't need an existing config file
        Commands::ConfigInit { output } => cli::config_init::run(output),
        // deploy loads the config, pointing the user at config-init if it is missing
        Commands::Deploy => {
            let config = load_config_or_default(&cli.config)?;
            deploy::setup::run(&config)
        }
        // all other commands require a valid config
        cmd => {
            let config = Config::load(&cli.config)?;
            match cmd {
                Commands::Status => cli::status::run(&config),
                Commands::CacheList => cli::cache::list(&config),
                Commands::CacheClean { all } => cli::cache::clean(&config, all),
                Commands::Warmup { path, newer_than } => {
                    cli::warmup::run(&config, &path, newer_than.as_deref())
                }
                Commands::Bwlimit { up, down } => {
                    cli::bwlimit::run(&config, up.as_deref(), down.as_deref())
                }
                Commands::Log { lines, follow } => cli::log::run(&config, lines, follow),
                Commands::SpeedTest => cli::speed_test::run(&config),
                Commands::Run => supervisor::run(&config),
                // already handled above
                Commands::ConfigInit { .. } | Commands::Deploy => unreachable!(),
            }
        }
    }
}

/// Load config from file, or return a useful error.
fn load_config_or_default(path: &std::path::Path) -> Result<Config> {
    if path.exists() {
        Config::load(path)
    } else {
        anyhow::bail!(
            "Config file not found: {}. Run `warpgate config-init` to generate one.",
            path.display()
        )
    }
}
73 warpgate/src/rclone/config.rs Normal file
@@ -0,0 +1,73 @@
//! Generate rclone.conf from Warpgate config.

use std::fmt::Write;
use std::path::Path;

use anyhow::{Context, Result};

use crate::config::Config;

/// Default path for generated rclone config.
pub const RCLONE_CONF_PATH: &str = "/etc/warpgate/rclone.conf";

/// Generate rclone.conf content for the SFTP remote.
///
/// Produces an INI-style config with a `[nas]` section containing all SFTP
/// connection parameters from the Warpgate config.
pub fn generate(config: &Config) -> Result<String> {
    let conn = &config.connection;
    let mut conf = String::new();

    writeln!(conf, "[nas]")?;
    writeln!(conf, "type = sftp")?;
    writeln!(conf, "host = {}", conn.nas_host)?;
    writeln!(conf, "user = {}", conn.nas_user)?;
    writeln!(conf, "port = {}", conn.sftp_port)?;

    if let Some(pass) = &conn.nas_pass {
        let obscured = obscure_password(pass)?;
        writeln!(conf, "pass = {obscured}")?;
    }
    if let Some(key_file) = &conn.nas_key_file {
        writeln!(conf, "key_file = {key_file}")?;
    }

    writeln!(conf, "connections = {}", conn.sftp_connections)?;

    // Disable hash checking — many NAS SFTP servers (e.g. Synology) don't support
    // running shell commands like md5sum, causing upload verification to fail.
    writeln!(conf, "disable_hashcheck = true")?;

    Ok(conf)
}

/// Obscure a password using `rclone obscure` (required for rclone.conf).
fn obscure_password(plain: &str) -> Result<String> {
    let output = std::process::Command::new("rclone")
        .args(["obscure", plain])
        .output()
        .context("Failed to run `rclone obscure`. Is rclone installed?")?;

    if !output.status.success() {
        let stderr = String::from_utf8_lossy(&output.stderr);
        anyhow::bail!("rclone obscure failed: {}", stderr.trim());
    }

    Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
}

/// Write rclone.conf to disk at [`RCLONE_CONF_PATH`].
pub fn write_config(config: &Config) -> Result<()> {
    let content = generate(config)?;
    let path = Path::new(RCLONE_CONF_PATH);

    if let Some(parent) = path.parent() {
        std::fs::create_dir_all(parent)
            .with_context(|| format!("Failed to create directory: {}", parent.display()))?;
    }

    std::fs::write(path, &content)
        .with_context(|| format!("Failed to write rclone config: {}", path.display()))?;

    Ok(())
}
3 warpgate/src/rclone/mod.rs Normal file
@@ -0,0 +1,3 @@
pub mod config;
pub mod mount;
pub mod rc;
141 warpgate/src/rclone/mount.rs Normal file
@@ -0,0 +1,141 @@
//! Manage rclone VFS FUSE mount lifecycle.

use anyhow::{Context, Result};

use crate::config::Config;

use super::config::RCLONE_CONF_PATH;

/// Build the full `rclone mount` command-line arguments from config.
///
/// Returns a `Vec<String>` starting with `"mount"` followed by the remote
/// source, mount point, and all VFS/cache flags derived from config.
pub fn build_mount_args(config: &Config) -> Vec<String> {
    let mut args = Vec::new();

    // Subcommand and source:dest
    args.push("mount".into());
    args.push(format!("nas:{}", config.connection.remote_path));
    args.push(config.mount.point.display().to_string());

    // Point to our generated rclone.conf
    args.push("--config".into());
    args.push(RCLONE_CONF_PATH.into());

    // VFS cache mode — full enables read-through + write-back
    args.push("--vfs-cache-mode".into());
    args.push("full".into());

    // Write-back delay
    args.push("--vfs-write-back".into());
    args.push(config.writeback.write_back.clone());

    // Cache size limits
    args.push("--vfs-cache-max-size".into());
    args.push(config.cache.max_size.clone());

    args.push("--vfs-cache-max-age".into());
    args.push(config.cache.max_age.clone());

    // NOTE: --vfs-cache-min-free-space requires rclone 1.65+.
    // Ubuntu apt may ship older versions. We detect support at runtime.
    if rclone_supports_min_free_space() {
        args.push("--vfs-cache-min-free-space".into());
        args.push(config.cache.min_free.clone());
    }

    // Cache directory (SSD path)
    args.push("--cache-dir".into());
    args.push(config.cache.dir.display().to_string());

    // Directory listing cache TTL
    args.push("--dir-cache-time".into());
    args.push(config.directory_cache.cache_time.clone());

    // Read optimization
    args.push("--buffer-size".into());
    args.push(config.read.buffer_size.clone());

    args.push("--vfs-read-chunk-size".into());
    args.push(config.read.chunk_size.clone());

    args.push("--vfs-read-chunk-size-limit".into());
    args.push(config.read.chunk_limit.clone());

    args.push("--vfs-read-ahead".into());
    args.push(config.read.read_ahead.clone());

    // Concurrent transfers for write-back
    args.push("--transfers".into());
    args.push(config.writeback.transfers.to_string());

    // Bandwidth limits (only add the flag if at least one direction is limited)
    let bw = format_bwlimit(&config.bandwidth.limit_up, &config.bandwidth.limit_down);
    if bw != "0" {
        args.push("--bwlimit".into());
        args.push(bw);
    }

    // Enable rclone RC API on the default port
    args.push("--rc".into());

    // Allow non-root users to access the FUSE mount (requires user_allow_other in /etc/fuse.conf)
    args.push("--allow-other".into());

    args
}

/// Format the `--bwlimit` value from separate up/down strings.
///
/// rclone accepts `RATE` for symmetric or `UP:DOWN` for asymmetric limits.
/// Returns `"0"` when both directions are unlimited.
fn format_bwlimit(up: &str, down: &str) -> String {
    let up_zero = up == "0" || up.is_empty();
    let down_zero = down == "0" || down.is_empty();

    match (up_zero, down_zero) {
        (true, true) => "0".into(),
        _ => format!("{up}:{down}"),
    }
}

/// Check if rclone supports `--vfs-cache-min-free-space` (added in v1.65).
///
/// Runs `rclone mount --help` and checks for the flag in the output.
/// Returns false if rclone is not found or the flag is absent.
fn rclone_supports_min_free_space() -> bool {
    std::process::Command::new("rclone")
        .args(["mount", "--help"])
        .output()
        .map(|o| {
            let stdout = String::from_utf8_lossy(&o.stdout);
            stdout.contains("--vfs-cache-min-free-space")
        })
        .unwrap_or(false)
}

/// Build the rclone mount command as a string (for systemd ExecStart).
pub fn build_mount_command(config: &Config) -> String {
    let args = build_mount_args(config);
    format!("/usr/bin/rclone {}", args.join(" "))
}

/// Check if the FUSE mount is currently active by inspecting `/proc/mounts`.
pub fn is_mounted(config: &Config) -> Result<bool> {
    let mount_point = config.mount.point.display().to_string();

    let content = std::fs::read_to_string("/proc/mounts")
        .with_context(|| "Failed to read /proc/mounts")?;

    for line in content.lines() {
        // /proc/mounts format: device mountpoint fstype options dump pass
        let mut fields = line.split_whitespace();
        let _device = fields.next();
        if fields.next() == Some(mount_point.as_str()) {
            return Ok(true);
        }
    }

    Ok(false)
}
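The `--bwlimit` formatting rule is small enough to check in isolation; this is a standalone copy of `format_bwlimit` as it appears in mount.rs:

```rust
/// rclone accepts `RATE` for symmetric or `UP:DOWN` for asymmetric limits;
/// `"0"` means both directions are unlimited, so the flag can be omitted.
fn format_bwlimit(up: &str, down: &str) -> String {
    let up_zero = up == "0" || up.is_empty();
    let down_zero = down == "0" || down.is_empty();

    match (up_zero, down_zero) {
        (true, true) => "0".into(),
        _ => format!("{up}:{down}"),
    }
}
```

Note that a single limited direction still emits both halves (e.g. `10M:0`), which rclone interprets as "limit upload, leave download unlimited".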
102 warpgate/src/rclone/rc.rs Normal file
@@ -0,0 +1,102 @@
//! rclone RC (Remote Control) API client.
//!
//! rclone exposes an HTTP API on localhost when started with `--rc`.
//! This module calls those endpoints for runtime status and control.

use anyhow::{Context, Result};
use serde::Deserialize;

/// Default rclone RC API address.
pub const RC_ADDR: &str = "http://127.0.0.1:5572";

/// Response from `core/stats`.
#[derive(Debug, Deserialize)]
pub struct CoreStats {
    pub bytes: u64,
    pub speed: f64,
    pub transfers: u64,
    pub errors: u64,
    #[serde(rename = "totalBytes")]
    pub total_bytes: Option<u64>,
}

/// Response from `vfs/stats`.
#[derive(Debug, Deserialize)]
pub struct VfsStats {
    #[serde(rename = "diskCache")]
    pub disk_cache: Option<DiskCacheStats>,
}

#[derive(Debug, Deserialize)]
pub struct DiskCacheStats {
    #[serde(rename = "bytesUsed")]
    pub bytes_used: u64,
    #[serde(rename = "erroredFiles")]
    pub errored_files: u64,
    #[serde(rename = "uploadsInProgress")]
    pub uploads_in_progress: u64,
    #[serde(rename = "uploadsQueued")]
    pub uploads_queued: u64,
}

/// Call `core/stats` — transfer statistics.
pub fn core_stats() -> Result<CoreStats> {
    let stats: CoreStats = ureq::post(format!("{RC_ADDR}/core/stats"))
        .send_json(serde_json::json!({}))?
        .body_mut()
        .read_json()
        .context("Failed to parse core/stats response")?;
    Ok(stats)
}

/// Call `vfs/stats` — VFS cache statistics.
pub fn vfs_stats() -> Result<VfsStats> {
    let stats: VfsStats = ureq::post(format!("{RC_ADDR}/vfs/stats"))
        .send_json(serde_json::json!({}))?
        .body_mut()
        .read_json()
        .context("Failed to parse vfs/stats response")?;
    Ok(stats)
}

/// Call `vfs/list` — list active VFS instances.
pub fn vfs_list(dir: &str) -> Result<serde_json::Value> {
    let value: serde_json::Value = ureq::post(format!("{RC_ADDR}/vfs/list"))
        .send_json(serde_json::json!({ "dir": dir }))?
        .body_mut()
        .read_json()
        .context("Failed to parse vfs/list response")?;
    Ok(value)
}

/// Call `vfs/forget` — force directory cache refresh.
pub fn vfs_forget(dir: &str) -> Result<()> {
    ureq::post(format!("{RC_ADDR}/vfs/forget"))
        .send_json(serde_json::json!({ "dir": dir }))?;
    Ok(())
}

/// Call `core/bwlimit` — get or set bandwidth limits.
///
/// If both `upload` and `download` are `None`, returns current limits.
/// Otherwise sets new limits using rclone's `UP:DOWN` rate format.
pub fn bwlimit(upload: Option<&str>, download: Option<&str>) -> Result<serde_json::Value> {
    let body = match (upload, download) {
        (None, None) => serde_json::json!({}),
        (up, down) => {
            let rate = format!("{}:{}", up.unwrap_or("off"), down.unwrap_or("off"));
            serde_json::json!({ "rate": rate })
        }
    };

    let value: serde_json::Value = ureq::post(format!("{RC_ADDR}/core/bwlimit"))
        .send_json(&body)?
        .body_mut()
        .read_json()
        .context("Failed to parse core/bwlimit response")?;
    Ok(value)
}
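A standalone copy of the rate-string rule from `bwlimit` above, where an unset direction becomes `off` so the other limit still applies (the helper name `rc_rate` is illustrative, not part of the module):

```rust
/// Build the "rate" value for the core/bwlimit RC call.
/// Returns None when neither direction is given (query-only call).
fn rc_rate(upload: Option<&str>, download: Option<&str>) -> Option<String> {
    match (upload, download) {
        (None, None) => None, // empty body queries the current limits
        (up, down) => Some(format!("{}:{}", up.unwrap_or("off"), down.unwrap_or("off"))),
    }
}
```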
4 warpgate/src/services/mod.rs Normal file
@@ -0,0 +1,4 @@
pub mod nfs;
pub mod samba;
pub mod systemd;
pub mod webdav;
47 warpgate/src/services/nfs.rs Normal file
@@ -0,0 +1,47 @@
//! Generate NFS export configuration.

use std::fs;
use std::path::Path;

use anyhow::{Context, Result};

use crate::config::Config;

/// Default output path for NFS exports.
pub const EXPORTS_PATH: &str = "/etc/exports.d/warpgate.exports";

/// Generate the NFS exports entry for the FUSE mount point.
///
/// Produces a line like:
/// ```text
/// /mnt/nas-photos 192.168.0.0/24(rw,sync,no_subtree_check,fsid=1)
/// ```
/// `fsid=1` is required for FUSE-backed mounts because the kernel cannot
/// derive a stable fsid from the device number.
pub fn generate(config: &Config) -> Result<String> {
    let mount_point = config.mount.point.display();
    let network = &config.protocols.nfs_allowed_network;

    let line = format!(
        "# Generated by Warpgate — do not edit manually.\n\
         {mount_point} {network}(rw,sync,no_subtree_check,fsid=1)\n"
    );

    Ok(line)
}

/// Write the exports file to disk.
pub fn write_config(config: &Config) -> Result<()> {
    let content = generate(config)?;

    // Ensure /etc/exports.d/ exists
    if let Some(parent) = Path::new(EXPORTS_PATH).parent() {
        fs::create_dir_all(parent)
            .with_context(|| format!("Failed to create directory: {}", parent.display()))?;
    }

    fs::write(EXPORTS_PATH, content)
        .with_context(|| format!("Failed to write {EXPORTS_PATH}"))?;

    Ok(())
}
72 warpgate/src/services/samba.rs Normal file
@@ -0,0 +1,72 @@
//! Generate Samba (SMB) configuration.

use std::fmt::Write as _;
use std::fs;
use std::path::Path;

use anyhow::{Context, Result};

use crate::config::Config;

/// Default output path for generated smb.conf.
pub const SMB_CONF_PATH: &str = "/etc/samba/smb.conf";

/// Generate smb.conf content that shares the rclone FUSE mount point.
pub fn generate(config: &Config) -> Result<String> {
    let mount_point = config.mount.point.display();

    let mut conf = String::new();

    // [global] section
    writeln!(conf, "# Generated by Warpgate — do not edit manually.")?;
    writeln!(conf, "[global]")?;
    writeln!(conf, " workgroup = WORKGROUP")?;
    writeln!(conf, " server string = Warpgate NAS Cache")?;
    writeln!(conf, " server role = standalone server")?;
    writeln!(conf)?;
    writeln!(conf, " # Require SMB2+ (disable insecure SMB1)")?;
    writeln!(conf, " server min protocol = SMB2_02")?;
    writeln!(conf)?;
    writeln!(conf, " # Guest / map-to-guest for simple setups")?;
    writeln!(conf, " map to guest = Bad User")?;
    writeln!(conf)?;
    writeln!(conf, " # Logging")?;
    writeln!(conf, " log file = /var/log/samba/log.%m")?;
    writeln!(conf, " max log size = 1000")?;
    writeln!(conf)?;
    writeln!(conf, " # Disable printer sharing")?;
    writeln!(conf, " load printers = no")?;
    writeln!(conf, " printing = bsd")?;
    writeln!(conf, " printcap name = /dev/null")?;
    writeln!(conf, " disable spoolss = yes")?;
    writeln!(conf)?;

    // [nas-photos] share section
    writeln!(conf, "[nas-photos]")?;
    writeln!(conf, " comment = Warpgate cached NAS share")?;
    writeln!(conf, " path = {mount_point}")?;
    writeln!(conf, " browseable = yes")?;
    writeln!(conf, " read only = no")?;
    writeln!(conf, " guest ok = yes")?;
    writeln!(conf, " force user = root")?;
    writeln!(conf, " create mask = 0644")?;
    writeln!(conf, " directory mask = 0755")?;

    Ok(conf)
}

/// Write smb.conf to disk.
pub fn write_config(config: &Config) -> Result<()> {
    let content = generate(config)?;

    // Ensure the parent directory exists
    if let Some(parent) = Path::new(SMB_CONF_PATH).parent() {
        fs::create_dir_all(parent)
            .with_context(|| format!("Failed to create directory: {}", parent.display()))?;
    }

    fs::write(SMB_CONF_PATH, content)
        .with_context(|| format!("Failed to write {SMB_CONF_PATH}"))?;

    Ok(())
}
78 warpgate/src/services/systemd.rs Normal file
@@ -0,0 +1,78 @@
//! Generate systemd unit files for Warpgate services.

use std::fmt::Write as _;
use std::fs;
use std::path::Path;
use std::process::Command;

use anyhow::{Context, Result};

use crate::config::Config;

/// Target directory for systemd unit files.
pub const SYSTEMD_DIR: &str = "/etc/systemd/system";

/// Single unified service unit name.
pub const RUN_SERVICE: &str = "warpgate.service";

/// Generate the single `warpgate.service` unit that runs `warpgate run`.
///
/// This replaces the old multi-unit approach. The `warpgate run` supervisor
/// manages rclone mount, SMB, NFS, and WebDAV internally.
pub fn generate_run_unit(_config: &Config) -> Result<String> {
    let mut unit = String::new();
    writeln!(unit, "# Generated by Warpgate — do not edit manually.")?;
    writeln!(unit, "[Unit]")?;
    writeln!(unit, "Description=Warpgate NAS cache proxy")?;
    writeln!(unit, "After=network-online.target")?;
    writeln!(unit, "Wants=network-online.target")?;
    writeln!(unit)?;
    writeln!(unit, "[Service]")?;
    writeln!(unit, "Type=simple")?;
    writeln!(unit, "ExecStart=/usr/local/bin/warpgate run")?;
    writeln!(unit, "Restart=on-failure")?;
    writeln!(unit, "RestartSec=10")?;
    writeln!(unit, "KillMode=mixed")?;
    writeln!(unit, "TimeoutStopSec=30")?;
    writeln!(unit)?;
    writeln!(unit, "[Install]")?;
    writeln!(unit, "WantedBy=multi-user.target")?;

    Ok(unit)
}

/// Install the single `warpgate.service` unit and reload systemd.
pub fn install_run_unit(config: &Config) -> Result<()> {
    let systemd_dir = Path::new(SYSTEMD_DIR);

    let unit_content = generate_run_unit(config)?;
    fs::write(systemd_dir.join(RUN_SERVICE), unit_content)
        .with_context(|| format!("Failed to write {RUN_SERVICE}"))?;

    // Reload systemd daemon
    let status = Command::new("systemctl")
        .arg("daemon-reload")
        .status()
        .context("Failed to run systemctl daemon-reload")?;

    if !status.success() {
        anyhow::bail!("systemctl daemon-reload failed with exit code: {}", status);
    }

    Ok(())
}

/// Enable and start the single `warpgate.service`.
pub fn enable_and_start_run() -> Result<()> {
    let status = Command::new("systemctl")
        .args(["enable", "--now", RUN_SERVICE])
        .status()
        .with_context(|| format!("Failed to run systemctl enable --now {RUN_SERVICE}"))?;

    if !status.success() {
        anyhow::bail!("systemctl enable --now {RUN_SERVICE} failed with exit code: {status}");
    }

    Ok(())
}
|
||||||
|
|
||||||
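For reference, `generate_run_unit` renders a unit of this shape. A standalone sketch rebuilding the same text with the same `writeln!`-into-`String` pattern (the `render_unit` helper is illustrative, not part of the crate):

```rust
use std::fmt::Write as _;

// Illustrative sketch: rebuilds the unit text the generator above
// produces, minus the "Generated by" header comment.
fn render_unit() -> String {
    let mut unit = String::new();
    let lines = [
        "[Unit]",
        "Description=Warpgate NAS cache proxy",
        "After=network-online.target",
        "Wants=network-online.target",
        "",
        "[Service]",
        "Type=simple",
        "ExecStart=/usr/local/bin/warpgate run",
        "Restart=on-failure",
        "RestartSec=10",
        "KillMode=mixed",
        "TimeoutStopSec=30",
        "",
        "[Install]",
        "WantedBy=multi-user.target",
    ];
    for line in lines {
        // writeln! into a String cannot fail, so unwrap is fine here.
        writeln!(unit, "{line}").unwrap();
    }
    unit
}
```

`KillMode=mixed` means systemd sends SIGTERM only to the main `warpgate` process on stop, letting the supervisor tear down its own children before the stop timeout.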
24  warpgate/src/services/webdav.rs  Normal file
@@ -0,0 +1,24 @@
//! WebDAV service management via rclone serve webdav.

use crate::config::Config;

/// Build the `rclone serve webdav` command arguments.
pub fn build_serve_args(config: &Config) -> Vec<String> {
    let mount_point = config.mount.point.display().to_string();
    let addr = format!("0.0.0.0:{}", config.protocols.webdav_port);

    vec![
        "serve".into(),
        "webdav".into(),
        mount_point,
        "--addr".into(),
        addr,
        "--read-only=false".into(),
    ]
}

/// Build the full command string (for systemd ExecStart).
pub fn build_serve_command(config: &Config) -> String {
    let args = build_serve_args(config);
    format!("/usr/bin/rclone {}", args.join(" "))
}
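With the template defaults (`point = "/mnt/nas-photos"`, `webdav_port = 8080`), these builders yield `/usr/bin/rclone serve webdav /mnt/nas-photos --addr 0.0.0.0:8080 --read-only=false`. A standalone sketch with stub config types (the real `Config` has many more fields; these structs are assumptions for illustration):

```rust
// Stub config types standing in for crate::config::Config.
struct Mount { point: String }
struct Protocols { webdav_port: u16 }
struct Config { mount: Mount, protocols: Protocols }

fn build_serve_args(config: &Config) -> Vec<String> {
    vec![
        "serve".into(),
        "webdav".into(),
        config.mount.point.clone(),
        "--addr".into(),
        format!("0.0.0.0:{}", config.protocols.webdav_port),
        "--read-only=false".into(),
    ]
}

fn build_serve_command(config: &Config) -> String {
    // Join args with spaces; fine here since none contain whitespace.
    format!("/usr/bin/rclone {}", build_serve_args(config).join(" "))
}
```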
420  warpgate/src/supervisor.rs  Normal file
@@ -0,0 +1,420 @@
//! `warpgate run` — single-process supervisor for all services.
//!
//! Manages rclone mount + protocol services in one process tree with
//! coordinated startup and shutdown. Designed to run as a systemd unit
//! or standalone (Docker-friendly).

use std::process::{Child, Command};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::{Duration, Instant};

use anyhow::{Context, Result};

use crate::config::Config;
use crate::rclone::mount::{build_mount_args, is_mounted};
use crate::services::{nfs, samba, webdav};

/// Mount ready timeout.
const MOUNT_TIMEOUT: Duration = Duration::from_secs(30);
/// Supervision loop poll interval.
const POLL_INTERVAL: Duration = Duration::from_secs(2);
/// Grace period for SIGTERM before escalating to SIGKILL.
const SIGTERM_GRACE: Duration = Duration::from_secs(3);
/// Max restart attempts before giving up on a protocol service.
const MAX_RESTARTS: u32 = 3;
/// Reset restart counter after this period of stable running.
const RESTART_STABLE_PERIOD: Duration = Duration::from_secs(300);

/// Tracks restart attempts for a supervised child process.
struct RestartTracker {
    count: u32,
    last_restart: Option<Instant>,
}

impl RestartTracker {
    fn new() -> Self {
        Self {
            count: 0,
            last_restart: None,
        }
    }

    /// Returns true if another restart is allowed. Resets the counter if the
    /// service has been stable for `RESTART_STABLE_PERIOD`.
    fn can_restart(&mut self) -> bool {
        if let Some(last) = self.last_restart
            && last.elapsed() >= RESTART_STABLE_PERIOD
        {
            self.count = 0;
        }
        self.count < MAX_RESTARTS
    }

    fn record_restart(&mut self) {
        self.count += 1;
        self.last_restart = Some(Instant::now());
    }
}
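The backoff-and-reset policy in `RestartTracker` can be exercised without real clock delays by passing time in explicitly. A minimal sketch of the same rules — linear backoff (2s, 4s, 6s), three attempts max, counter forgiven after 300s of stable running — where the names and the seconds-based clock are illustrative, not the crate's API:

```rust
// Illustrative restart policy; `now` is seconds since supervisor start,
// injected so the logic is testable without sleeping.
const MAX_RESTARTS: u32 = 3;
const STABLE_SECS: u64 = 300;

struct Tracker {
    count: u32,
    last_restart_at: Option<u64>,
}

impl Tracker {
    fn new() -> Self {
        Self { count: 0, last_restart_at: None }
    }

    fn can_restart(&mut self, now: u64) -> bool {
        if let Some(t) = self.last_restart_at {
            if now - t >= STABLE_SECS {
                self.count = 0; // stable long enough: forgive past failures
            }
        }
        self.count < MAX_RESTARTS
    }

    /// Records an attempt and returns the linear backoff delay in seconds.
    fn record_restart(&mut self, now: u64) -> u64 {
        self.count += 1;
        self.last_restart_at = Some(now);
        u64::from(self.count) * 2 // 2s, 4s, 6s
    }
}
```

The stability reset matters: without it, three crashes spread over days would permanently disable a service that is otherwise healthy.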
/// Child processes for protocol servers managed by the supervisor.
///
/// Implements `Drop` to kill any spawned children — prevents orphaned
/// processes if startup fails partway through `start_protocols()`.
struct ProtocolChildren {
    smbd: Option<Child>,
    webdav: Option<Child>,
}

impl Drop for ProtocolChildren {
    fn drop(&mut self) {
        for child in [&mut self.smbd, &mut self.webdav].into_iter().flatten() {
            graceful_kill(child);
        }
    }
}

/// Entry point — called from main.rs for `warpgate run`.
pub fn run(config: &Config) -> Result<()> {
    let shutdown = Arc::new(AtomicBool::new(false));

    // Install signal handler (SIGTERM + SIGINT)
    let shutdown_flag = Arc::clone(&shutdown);
    ctrlc::set_handler(move || {
        eprintln!("Signal received, shutting down...");
        shutdown_flag.store(true, Ordering::SeqCst);
    })
    .context("Failed to set signal handler")?;

    // Phase 1: Preflight — generate configs, create dirs
    println!("Preflight checks...");
    preflight(config)?;

    // Phase 2: Start rclone mount and wait for it to become ready
    println!("Starting rclone mount...");
    let mut mount_child = start_and_wait_mount(config, &shutdown)?;
    println!("Mount ready at {}", config.mount.point.display());

    // Phase 3: Start protocol services
    if shutdown.load(Ordering::SeqCst) {
        println!("Shutdown signal received during mount.");
        let _ = mount_child.kill();
        let _ = mount_child.wait();
        return Ok(());
    }
    println!("Starting protocol services...");
    let mut protocols = start_protocols(config)?;

    // Phase 4: Supervision loop
    println!("Supervision active. Press Ctrl+C to stop.");
    let result = supervise(config, &mut mount_child, &mut protocols, Arc::clone(&shutdown));

    // Phase 5: Teardown (always runs)
    println!("Shutting down...");
    shutdown_services(config, &mut mount_child, &mut protocols);

    result
}
/// Write configs and create directories. Reuses existing modules.
fn preflight(config: &Config) -> Result<()> {
    // Ensure mount point exists
    std::fs::create_dir_all(&config.mount.point).with_context(|| {
        format!(
            "Failed to create mount point: {}",
            config.mount.point.display()
        )
    })?;

    // Ensure cache directory exists
    std::fs::create_dir_all(&config.cache.dir).with_context(|| {
        format!("Failed to create cache dir: {}", config.cache.dir.display())
    })?;

    // Generate rclone config
    crate::rclone::config::write_config(config)?;

    // Generate protocol configs
    if config.protocols.enable_smb {
        samba::write_config(config)?;
    }
    if config.protocols.enable_nfs {
        nfs::write_config(config)?;
    }

    Ok(())
}
/// Spawn rclone mount process and poll until the FUSE mount appears.
fn start_and_wait_mount(config: &Config, shutdown: &AtomicBool) -> Result<Child> {
    let args = build_mount_args(config);

    let mut child = Command::new("rclone")
        .args(&args)
        .spawn()
        .context("Failed to spawn rclone mount")?;

    // Poll for mount readiness
    let deadline = Instant::now() + MOUNT_TIMEOUT;
    loop {
        // Check for shutdown signal (e.g. Ctrl+C during mount wait)
        if shutdown.load(Ordering::SeqCst) {
            let _ = child.kill();
            let _ = child.wait();
            anyhow::bail!("Interrupted while waiting for mount");
        }

        if Instant::now() > deadline {
            let _ = child.kill();
            let _ = child.wait();
            anyhow::bail!(
                "Timed out waiting for mount at {} ({}s)",
                config.mount.point.display(),
                MOUNT_TIMEOUT.as_secs()
            );
        }

        // Detect early rclone exit (e.g. bad config, auth failure)
        match child.try_wait() {
            Ok(Some(status)) => {
                anyhow::bail!("rclone mount exited immediately ({status}). Check remote/auth config.");
            }
            Ok(None) => {} // still running, good
            Err(e) => {
                anyhow::bail!("Failed to check rclone mount status: {e}");
            }
        }

        match is_mounted(config) {
            Ok(true) => break,
            Ok(false) => {}
            Err(e) => eprintln!("Warning: mount check failed: {e}"),
        }

        thread::sleep(Duration::from_millis(500));
    }

    Ok(child)
}
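The wait loop above combines three exit conditions: shutdown signal, deadline, and early child death. A generic sketch of that shape, with closures standing in for `try_wait()` and `is_mounted()` (the `wait_ready` helper and `WaitOutcome` enum are illustrative, not the crate's API):

```rust
use std::time::{Duration, Instant};

enum WaitOutcome { Ready, DiedEarly, TimedOut }

// Poll `is_ready` until it succeeds, but bail out early if the
// underlying process has already died rather than burning the timeout.
fn wait_ready(
    timeout: Duration,
    poll: Duration,
    mut still_alive: impl FnMut() -> bool,
    mut is_ready: impl FnMut() -> bool,
) -> WaitOutcome {
    let deadline = Instant::now() + timeout;
    loop {
        if !still_alive() {
            return WaitOutcome::DiedEarly; // fail fast on startup errors
        }
        if is_ready() {
            return WaitOutcome::Ready;
        }
        if Instant::now() > deadline {
            return WaitOutcome::TimedOut;
        }
        std::thread::sleep(poll);
    }
}
```

The ordering is the point: checking liveness before the deadline turns a bad rclone config into an immediate error instead of a silent 30-second stall.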
/// Spawn smbd as a foreground child process.
fn spawn_smbd() -> Result<Child> {
    Command::new("smbd")
        .args(["-F", "-S", "-N", "-s", samba::SMB_CONF_PATH])
        .spawn()
        .context("Failed to spawn smbd")
}

/// Start protocol services after the mount is ready.
///
/// - SMB: spawn `smbd -F` as a child process
/// - NFS: `exportfs -ra`
/// - WebDAV: spawn `rclone serve webdav` as a child process
fn start_protocols(config: &Config) -> Result<ProtocolChildren> {
    let smbd = if config.protocols.enable_smb {
        let child = spawn_smbd()?;
        println!("  SMB: started");
        Some(child)
    } else {
        None
    };

    if config.protocols.enable_nfs {
        let status = Command::new("exportfs")
            .arg("-ra")
            .status()
            .context("Failed to run exportfs -ra")?;
        if !status.success() {
            anyhow::bail!("exportfs -ra failed: {status}");
        }
        println!("  NFS: exported");
    }

    let webdav = if config.protocols.enable_webdav {
        let child = spawn_webdav(config)?;
        println!("  WebDAV: started");
        Some(child)
    } else {
        None
    };

    Ok(ProtocolChildren { smbd, webdav })
}

/// Spawn a `rclone serve webdav` child process.
fn spawn_webdav(config: &Config) -> Result<Child> {
    let args = webdav::build_serve_args(config);
    Command::new("rclone")
        .args(&args)
        .spawn()
        .context("Failed to spawn rclone serve webdav")
}
/// Main supervision loop. Polls child processes every 2s.
///
/// - If rclone mount dies → full shutdown (data safety: dirty files may be in flight).
/// - If smbd/WebDAV dies → restart up to 3 times (counter resets after 5 min stable).
/// - Checks shutdown flag set by signal handler.
fn supervise(
    config: &Config,
    mount: &mut Child,
    protocols: &mut ProtocolChildren,
    shutdown: Arc<AtomicBool>,
) -> Result<()> {
    let mut smbd_tracker = RestartTracker::new();
    let mut webdav_tracker = RestartTracker::new();

    loop {
        // Check for shutdown signal
        if shutdown.load(Ordering::SeqCst) {
            println!("Shutdown signal received.");
            return Ok(());
        }

        // Check rclone mount process
        match mount.try_wait() {
            Ok(Some(status)) => {
                anyhow::bail!(
                    "rclone mount exited unexpectedly ({status}). Initiating full shutdown for data safety."
                );
            }
            Ok(None) => {} // still running
            Err(e) => {
                anyhow::bail!("Failed to check rclone mount status: {e}");
            }
        }

        // Check smbd process (if enabled)
        if let Some(child) = &mut protocols.smbd {
            match child.try_wait() {
                Ok(Some(status)) => {
                    eprintln!("smbd exited ({status}).");
                    if smbd_tracker.can_restart() {
                        smbd_tracker.record_restart();
                        let delay = smbd_tracker.count * 2;
                        eprintln!(
                            "Restarting smbd in {delay}s ({}/{MAX_RESTARTS})...",
                            smbd_tracker.count
                        );
                        thread::sleep(Duration::from_secs(delay.into()));
                        match spawn_smbd() {
                            Ok(new_child) => *child = new_child,
                            Err(e) => {
                                eprintln!("Failed to restart smbd: {e}");
                                protocols.smbd = None;
                            }
                        }
                    } else {
                        eprintln!("smbd exceeded max restarts ({MAX_RESTARTS}), giving up.");
                        protocols.smbd = None;
                    }
                }
                Ok(None) => {} // still running
                Err(e) => eprintln!("Warning: failed to check smbd status: {e}"),
            }
        }

        // Check WebDAV process (if enabled)
        if let Some(child) = &mut protocols.webdav {
            match child.try_wait() {
                Ok(Some(status)) => {
                    eprintln!("WebDAV exited ({status}).");
                    if webdav_tracker.can_restart() {
                        webdav_tracker.record_restart();
                        let delay = webdav_tracker.count * 2;
                        eprintln!(
                            "Restarting WebDAV in {delay}s ({}/{MAX_RESTARTS})...",
                            webdav_tracker.count
                        );
                        thread::sleep(Duration::from_secs(delay.into()));
                        match spawn_webdav(config) {
                            Ok(new_child) => *child = new_child,
                            Err(e) => {
                                eprintln!("Failed to restart WebDAV: {e}");
                                protocols.webdav = None;
                            }
                        }
                    } else {
                        eprintln!("WebDAV exceeded max restarts ({MAX_RESTARTS}), giving up.");
                        protocols.webdav = None;
                    }
                }
                Ok(None) => {} // still running
                Err(e) => eprintln!("Warning: failed to check WebDAV status: {e}"),
            }
        }

        thread::sleep(POLL_INTERVAL);
    }
}
/// Send SIGTERM, wait up to `SIGTERM_GRACE`, then SIGKILL if still alive.
///
/// smbd forks worker processes per client connection — SIGTERM lets
/// the parent signal its children to exit cleanly. SIGKILL would
/// orphan those workers.
fn graceful_kill(child: &mut Child) {
    let pid = child.id() as i32;
    // SAFETY: sending a signal to a known child PID is safe.
    unsafe { libc::kill(pid, libc::SIGTERM) };

    let deadline = Instant::now() + SIGTERM_GRACE;
    loop {
        match child.try_wait() {
            Ok(Some(_)) => return, // exited cleanly
            Ok(None) => {}
            Err(_) => break,
        }
        if Instant::now() > deadline {
            break;
        }
        thread::sleep(Duration::from_millis(100));
    }

    // Still alive after grace period — escalate
    let _ = child.kill(); // SIGKILL
    let _ = child.wait();
}
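A runnable sketch of the same escalation, using the external `kill` utility in place of `libc::kill` so it compiles with only the standard library; `sleep` stands in for a protocol child (Unix-only assumption, and the helper names are illustrative):

```rust
use std::process::{Child, Command};
use std::thread;
use std::time::{Duration, Instant};

// Returns true if the child exited within the grace period after
// SIGTERM; otherwise escalates to SIGKILL and returns false.
fn graceful_kill_sketch(child: &mut Child, grace: Duration) -> bool {
    // Ask politely first (assumes a Unix `kill` binary on PATH).
    let _ = Command::new("kill")
        .args(["-TERM", &child.id().to_string()])
        .status();

    let deadline = Instant::now() + grace;
    while Instant::now() < deadline {
        if let Ok(Some(_)) = child.try_wait() {
            return true; // exited (and was reaped) within the grace period
        }
        thread::sleep(Duration::from_millis(50));
    }

    // Still alive after the grace period — escalate.
    let _ = child.kill(); // SIGKILL
    let _ = child.wait();
    false
}
```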
/// Reverse-order teardown of all services.
///
/// Order: stop smbd → unexport NFS → kill WebDAV → unmount FUSE → kill rclone.
fn shutdown_services(config: &Config, mount: &mut Child, protocols: &mut ProtocolChildren) {
    // Stop SMB
    if let Some(child) = &mut protocols.smbd {
        graceful_kill(child);
        println!("  SMB: stopped");
    }

    // Unexport NFS
    if config.protocols.enable_nfs {
        let _ = Command::new("exportfs").arg("-ua").status();
        println!("  NFS: unexported");
    }

    // Kill WebDAV
    if let Some(child) = &mut protocols.webdav {
        graceful_kill(child);
        println!("  WebDAV: stopped");
    }

    // Lazy unmount FUSE
    let mount_point = config.mount.point.display().to_string();
    let _ = Command::new("fusermount")
        .args(["-uz", &mount_point])
        .status();
    println!("  FUSE: unmounted");

    // Gracefully stop rclone
    graceful_kill(mount);
    println!("  rclone: stopped");
}
72  warpgate/templates/config.toml.default  Normal file
@@ -0,0 +1,72 @@
# Warpgate Configuration
# See: https://github.com/user/warpgate for documentation

[connection]
# Remote NAS Tailscale IP or hostname
nas_host = "100.x.x.x"
# SFTP username
nas_user = "admin"
# SFTP password (prefer key_file for security)
# nas_pass = "your-password"
# Path to SSH private key (recommended)
# nas_key_file = "/root/.ssh/id_ed25519"
# Target directory on NAS
remote_path = "/volume1/photos"
# SFTP port
sftp_port = 22
# SFTP connection pool size
sftp_connections = 8

[cache]
# Cache storage directory (should be on SSD; prefer a btrfs/ZFS filesystem)
dir = "/mnt/ssd/warpgate"
# Max cache size (leave room for dirty files during offline writes)
max_size = "200G"
# Max cache retention time
max_age = "720h"
# Minimum free space on cache disk
min_free = "10G"

[read]
# Chunk size for large file reads
chunk_size = "256M"
# Max chunk auto-growth limit
chunk_limit = "1G"
# Read-ahead buffer for sequential reads
read_ahead = "512M"
# In-memory buffer size
buffer_size = "256M"

[bandwidth]
# Upload (write-back) speed limit ("0" = unlimited)
limit_up = "0"
# Download (cache pull) speed limit ("0" = unlimited)
limit_down = "0"
# Enable adaptive write-back throttling (auto-reduce on congestion)
adaptive = true

[writeback]
# Delay before async write-back to NAS
write_back = "5s"
# Concurrent upload transfers
transfers = 4

[directory_cache]
# Directory listing cache TTL (lower = faster remote change detection)
cache_time = "1h"

[protocols]
# Enable SMB (Samba) sharing — primary for macOS/Windows
enable_smb = true
# Enable NFS export — for Linux clients
enable_nfs = false
# Enable WebDAV service — for mobile clients
enable_webdav = false
# NFS allowed network CIDR
nfs_allowed_network = "192.168.0.0/24"
# WebDAV listen port
webdav_port = 8080

[mount]
# FUSE mount point (all protocols share this)
point = "/mnt/nas-photos"