Compare commits

...

13 Commits

Author SHA1 Message Date
86766cc004 feat: add warmup schedule (cron) field to config UI
The backend already supports warmup_schedule for periodic cache warmup,
but the field was missing from the web config editor.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 01:39:21 +08:00
d2b9f46b1a feat: add Apply Config progress modal and fix stale PENDING health after reload
- Add 4-step progress modal to config apply flow (validate, write, reload, services ready)
- Poll SSE-updated data-share-health attributes to detect when services finish restarting
- Fix stale health bug: recalculate health for affected shares based on actual mount
  success instead of preserving old health from before reload
- Add modal overlay/card/step CSS matching the dark theme
- Include connection refactor (multi-protocol support) and probe helpers from prior work

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-20 01:11:50 +08:00
85e682c815 Add rclone connection probe helpers and config UI styles 2026-02-19 23:20:50 +08:00
3a858431f1 Fix config test button lock and add backend timeout 2026-02-19 23:18:09 +08:00
d5b83a0075 fix: update sync status indicator in real time via SSE
The sync-status region was not included in the SSE OOB swap, so its
state never updated after page load. Added a SyncStatusPartial template
and included it in the SSE payload so the UI switches in real time when
the dirty count drops to zero.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 17:30:08 +08:00
faf9d80824 feat: fill implementation gaps — preset unification, cron, adaptive bw, update cmd, tests
Step 1 — Unify preset logic (eliminate dual implementation)
- src/cli/preset.rs: add missing fields (chunk_limit, multi_thread_streams,
  multi_thread_cutoff), fix Office buffer_size 64M→128M, implement FromStr
- src/web/api.rs: post_preset() now calls Preset::apply() — no more inlined
  params; Office write_back unified to 5s (was 3s in API)

Step 2 — Fix setup.rs connection test: warn→bail
- All 4 "Warning: Could not connect/resolve" prints replaced with anyhow::bail!
  matching deploy/setup.rs behavior

Step 3 — Web UI: add [web] and [notifications] edit sections
- templates/web/tabs/config.html: new collapsible Web UI (password) and
  Notifications (webhook_url, cache_threshold_pct, nas_offline_minutes,
  writeback_depth) sections, both tagged "No restart"
- Also adds [log] section (file path + level select, "Full restart")

Step 4 — Full cron expression support in warmup scheduler
- Cargo.toml: add cron = "0.12", chrono = "0.4"
- supervisor.rs: normalize_cron_schedule() converts 5-field standard cron to
  7-field cron crate format; replaces naive hour-only matching
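Step 4's normalization can be sketched like this; the function name matches the commit, but the body is an assumed reconstruction, not the actual supervisor.rs code:

```rust
// Assumed reconstruction of normalize_cron_schedule(): the `cron` crate
// expects 7 fields (sec min hour day-of-month month day-of-week year), so a
// standard 5-field expression gets seconds prepended and year appended.
fn normalize_cron_schedule(expr: &str) -> String {
    let fields: Vec<&str> = expr.split_whitespace().collect();
    if fields.len() == 5 {
        format!("0 {} *", fields.join(" "))
    } else {
        expr.to_string() // already 6/7-field (or invalid): pass through unchanged
    }
}

fn main() {
    // "run at 03:00 daily" in standard cron → 7-field crate format
    assert_eq!(normalize_cron_schedule("0 3 * * *"), "0 0 3 * * * *");
    // already 7-field: unchanged
    assert_eq!(normalize_cron_schedule("0 0 3 * * * *"), "0 0 3 * * * *");
}
```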

Step 5 — Adaptive bandwidth algorithm
- supervisor.rs: extract compute_adaptive_limit() pure function; sliding
  window of 6 samples, cv>0.3→congested (−25%, floor 1MiB/s), stable
  near-limit→maintain, under-utilizing→+10% (capped at limit_up)
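The Step 5 policy reads naturally as a small pure function. The sketch below is an assumed reconstruction of the behavior described above; the 0.9 "near-limit" threshold is a guess, not taken from the commit:

```rust
const MIB: u64 = 1024 * 1024;

// Assumed reconstruction of compute_adaptive_limit(): `window` holds recent
// throughput samples (bytes/s), `current` is the active limit, `limit_up`
// the configured ceiling. Coefficient of variation > 0.3 signals congestion
// (back off 25%, floor 1 MiB/s); stable near-limit maintains; otherwise
// grow 10% toward limit_up. The 0.9 near-limit threshold is hypothetical.
fn compute_adaptive_limit(window: &[u64], current: u64, limit_up: u64) -> u64 {
    if window.is_empty() || current == 0 || limit_up == 0 {
        return current; // no data, or limiting disabled
    }
    let mean = window.iter().sum::<u64>() as f64 / window.len() as f64;
    let var = window
        .iter()
        .map(|&s| {
            let d = s as f64 - mean;
            d * d
        })
        .sum::<f64>()
        / window.len() as f64;
    let cv = if mean > 0.0 { var.sqrt() / mean } else { 0.0 };
    if cv > 0.3 {
        ((current as f64 * 0.75) as u64).max(MIB) // congested: −25%, floored
    } else if mean >= current as f64 * 0.9 {
        current // stable and near the limit: maintain
    } else {
        ((current as f64 * 1.10) as u64).min(limit_up) // under limit_up: +10%
    }
}

fn main() {
    // Bursty window → congestion → back off, floored at 1 MiB/s
    let bursty = [MIB, 100 * MIB, MIB, 100 * MIB, MIB, 100 * MIB];
    assert_eq!(compute_adaptive_limit(&bursty, MIB, 50 * MIB), MIB);
    // Smooth window well below the limit → grow 10%
    let smooth = [10 * MIB; 6];
    assert_eq!(
        compute_adaptive_limit(&smooth, 20 * MIB, 50 * MIB),
        (20 * MIB) as u64 + ((20 * MIB) as f64 * 0.10) as u64
    );
}
```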

Step 6 — warpgate update command
- src/cli/update.rs: query GitHub Releases API, compare with CARGO_PKG_VERSION
- src/main.rs: add Update{apply}, SetupWifi, CloneMac{interface} commands
- src/cli/wifi.rs: TODO stub for WiFi AP setup

Unit tests (+35, total 188→223)
- cli/preset.rs: 10 tests — FromStr, all fields for each preset, idempotency,
  connection/share isolation, write_back consistency regression
- supervisor.rs: 14 tests — normalize_cron_schedule (5 cases),
  compute_adaptive_limit (9 cases: congestion, floor, stable, under-utilizing,
  cap, zero-current, zero-max, empty window)
- config.rs: 11 tests — WebConfig (3), NotificationsConfig (4), LogConfig (4)

Shell tests (+4 scripts)
- tests/09-cli/test-preset-cli.sh: preset CLI without daemon; checks all
  three presets write correct values including unified buffer_size/write_back
- tests/09-cli/test-update-command.sh: update command; skips on no-network
- tests/10-scheduled/test-cron-warmup-schedule.sh: "* * * * *" fires in <90s
- tests/10-scheduled/test-adaptive-bandwidth.sh: adaptive loop stability
- tests/harness/config-gen.sh: add warmup.warmup_schedule override support
- tests/run-all.sh: add 10-scheduled category

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 16:55:00 +08:00
a11c899d71 fix: resolve merge conflicts — unify daemon.rs, dedup supervisor Reconnect arm, add missing Config fields in setup.rs 2026-02-19 15:46:41 +08:00
e67c11b215 merge: setup wizard, preset, reconnect, pre-deploy probe, status sync indicator 2026-02-19 15:45:22 +08:00
d4bc2dd59d merge: backend config, web auth, notifications, scheduled warmup, nas_offline/all_synced 2026-02-19 15:45:17 +08:00
ee9ac2ce2d feat: Web UI — offline banner, sync indicator, preset buttons, reconnect button
- Task A: Offline mode banner in layout (nas_offline field in LayoutTemplate)
- Task B: Safe-to-disconnect sync indicator on dashboard (all_synced field)
- Task C: Preset apply buttons (photographer/video/office) in config tab with POST /api/preset/{profile} endpoint
- Task D: Reconnect button and error banner in share detail panel
- Added nas_offline/all_synced fields to DaemonStatus for integration contract

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 15:44:36 +08:00
e05165f136 feat: backend — web auth, notifications, scheduled warmup, nas_offline/all_synced, reconnect API
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 15:44:17 +08:00
455fb349cd Add warpgate setup wizard, preset command, reconnect command, pre-deploy probe, sync indicator
- warpgate setup: interactive Q&A wizard for first-time configuration
- warpgate preset: apply photographer/video/office presets from PRD §9
- warpgate reconnect: re-probe + re-mount share without full restart
- warpgate deploy: test NAS SFTP connectivity before installing systemd
- warpgate status: show 'All synced — safe to disconnect' indicator
- SupervisorCmd::Reconnect(String) for daemon command channel

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 15:37:55 +08:00
a8fe1859e3 Add web auth, notifications, scheduled warmup, NAS offline state
- config: add [web] password field for HTTP Basic Auth
- config: add [notifications] webhook URL + thresholds
- config: add warmup.warmup_schedule for nightly cache warmup
- daemon: add nas_offline, all_synced, notification tracking to DaemonStatus
- daemon: add SupervisorCmd::Reconnect(String) for share reconnect
- supervisor: compute nas_offline/all_synced each poll cycle
- supervisor: send webhook notifications (NAS offline, writeback depth)
- supervisor: handle Reconnect command (kill+reset share for re-probe)
- supervisor: scheduled warmup based on warmup_schedule cron hour
- web/mod: HTTP Basic Auth middleware (when web.password is set)
- web/api: expose nas_offline, all_synced in status endpoint
- web/api: POST /api/reconnect/{share} endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-19 15:37:05 +08:00
44 changed files with 4092 additions and 305 deletions

Cargo.lock generated

@ -17,6 +17,15 @@ dependencies = [
"memchr",
]
[[package]]
name = "android_system_properties"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
dependencies = [
"libc",
]
[[package]]
name = "anstream"
version = "0.6.21"
@ -131,6 +140,12 @@ version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
[[package]]
name = "autocfg"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
[[package]]
name = "axum"
version = "0.8.8"
@ -213,6 +228,12 @@ dependencies = [
"objc2",
]
[[package]]
name = "bumpalo"
version = "3.20.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c6f81257d10a0f602a294ae4182251151ff97dbb504ef9afcdda4a64b24d9b4"
[[package]]
name = "bytes"
version = "1.11.1"
@ -241,6 +262,19 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chrono"
version = "0.4.43"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fac4744fb15ae8337dc853fee7fb3f4e48c0fbaa23d0afe49c447b4fab126118"
dependencies = [
"iana-time-zone",
"js-sys",
"num-traits",
"wasm-bindgen",
"windows-link",
]
[[package]]
name = "clap"
version = "4.5.59"
@ -316,6 +350,12 @@ dependencies = [
"url",
]
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "crc32fast"
version = "1.5.0"
@ -325,6 +365,17 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "cron"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6f8c3e73077b4b4a6ab1ea5047c37c57aee77657bc8ecd6f29b0af082d0b0c07"
dependencies = [
"chrono",
"nom",
"once_cell",
]
[[package]]
name = "crossbeam-channel"
version = "0.5.15"
@ -566,6 +617,30 @@ dependencies = [
"tower-service",
]
[[package]]
name = "iana-time-zone"
version = "0.1.65"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e31bc9ad994ba00e440a8aa5c9ef0ec67d5cb5e5cb0cc7f8b744a35b389cc470"
dependencies = [
"android_system_properties",
"core-foundation-sys",
"iana-time-zone-haiku",
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
]
[[package]]
name = "iana-time-zone-haiku"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
dependencies = [
"cc",
]
[[package]]
name = "icu_collections"
version = "2.1.1"
@ -690,6 +765,16 @@ version = "1.0.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2"
[[package]]
name = "js-sys"
version = "0.3.85"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c942ebf8e95485ca0d52d97da7c5a2c387d0e7f0ba4c35e93bfcaee045955b3"
dependencies = [
"once_cell",
"wasm-bindgen",
]
[[package]]
name = "lazy_static"
version = "1.5.0"
@ -747,6 +832,12 @@ version = "0.3.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
[[package]]
name = "minimal-lexical"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a"
[[package]]
name = "miniz_oxide"
version = "0.8.9"
@ -780,6 +871,16 @@ dependencies = [
"libc",
]
[[package]]
name = "nom"
version = "7.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a"
dependencies = [
"memchr",
"minimal-lexical",
]
[[package]]
name = "nu-ansi-term"
version = "0.50.3"
@ -795,6 +896,15 @@ version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51d515d32fb182ee37cda2ccdcb92950d6a3c2893aa280e540671c2cd0f3b1d9"
[[package]]
name = "num-traits"
version = "0.2.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
dependencies = [
"autocfg",
]
[[package]]
name = "objc2"
version = "0.6.3"
@ -945,6 +1055,12 @@ dependencies = [
"untrusted",
]
[[package]]
name = "rustversion"
version = "1.0.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d"
[[package]]
name = "ryu"
version = "1.0.23"
@ -1483,7 +1599,9 @@ dependencies = [
"anyhow",
"askama",
"axum",
"chrono",
"clap",
"cron",
"ctrlc",
"libc",
"serde",
@ -1505,6 +1623,51 @@ version = "0.11.1+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b"
[[package]]
name = "wasm-bindgen"
version = "0.2.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "64024a30ec1e37399cf85a7ffefebdb72205ca1c972291c51512360d90bd8566"
dependencies = [
"cfg-if",
"once_cell",
"rustversion",
"wasm-bindgen-macro",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "008b239d9c740232e71bd39e8ef6429d27097518b6b30bdf9086833bd5b6d608"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
]
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5256bae2d58f54820e6490f9839c49780dff84c65aeab9e772f15d5f0e913a55"
dependencies = [
"bumpalo",
"proc-macro2",
"quote",
"syn",
"wasm-bindgen-shared",
]
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.108"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1f01b580c9ac74c8d8f0c0e4afb04eeef2acf145458e52c03845ee9cd23e3d12"
dependencies = [
"unicode-ident",
]
[[package]]
name = "webpki-roots"
version = "1.0.6"
@ -1514,12 +1677,65 @@ dependencies = [
"rustls-pki-types",
]
[[package]]
name = "windows-core"
version = "0.62.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8e83a14d34d0623b51dce9581199302a221863196a1dde71a7663a4c2be9deb"
dependencies = [
"windows-implement",
"windows-interface",
"windows-link",
"windows-result",
"windows-strings",
]
[[package]]
name = "windows-implement"
version = "0.60.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "053e2e040ab57b9dc951b72c264860db7eb3b0200ba345b4e4c3b14f67855ddf"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-interface"
version = "0.59.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f316c4a2570ba26bbec722032c4099d8c8bc095efccdc15688708623367e358"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-link"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
[[package]]
name = "windows-result"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7781fa89eaf60850ac3d2da7af8e5242a5ea78d1a11c49bf2910bb5a73853eb5"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-strings"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7837d08f69c77cf6b07689544538e017c1bfcf57e34b4c0ff58e6c2cd3b37091"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-sys"
version = "0.52.0"


@ -21,3 +21,5 @@ tower-http = { version = "0.6", features = ["cors"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tracing-appender = "0.2"
cron = "0.12"
chrono = { version = "0.4", features = ["clock"] }


@ -2,6 +2,11 @@ pub mod bwlimit;
pub mod cache;
pub mod config_init;
pub mod log;
pub mod preset;
pub mod reconnect;
pub mod setup;
pub mod speed_test;
pub mod status;
pub mod update;
pub mod warmup;
pub mod wifi; // TODO: WiFi AP setup

src/cli/preset.rs Normal file

@ -0,0 +1,273 @@
//! `warpgate preset` — apply a usage preset to the current config.
//!
//! Presets are predefined parameter sets from PRD §9, optimized for
//! specific workloads: photographer (large RAW files), video (sequential
//! large files), or office (small files, frequent sync).
use std::path::Path;
use anyhow::Result;
use crate::config::Config;
#[derive(Debug, Clone, Copy)]
pub enum Preset {
Photographer,
Video,
Office,
}
impl std::str::FromStr for Preset {
type Err = anyhow::Error;
fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
match s {
"photographer" => Ok(Self::Photographer),
"video" => Ok(Self::Video),
"office" => Ok(Self::Office),
_ => Err(anyhow::anyhow!(
"Unknown preset '{}'. Use: photographer, video, office",
s
)),
}
}
}
impl Preset {
pub fn apply(&self, config: &mut Config) {
match self {
Self::Photographer => {
config.cache.max_size = "500G".into();
config.read.chunk_size = "256M".into();
config.read.chunk_limit = "1G".into();
config.read.read_ahead = "512M".into();
config.read.buffer_size = "256M".into();
config.read.multi_thread_streams = 4;
config.read.multi_thread_cutoff = "50M".into();
config.directory_cache.cache_time = "2h".into();
config.writeback.write_back = "5s".into();
config.writeback.transfers = 4;
config.protocols.enable_smb = true;
config.protocols.enable_nfs = false;
config.protocols.enable_webdav = false;
}
Self::Video => {
config.cache.max_size = "1T".into();
config.read.chunk_size = "512M".into();
config.read.chunk_limit = "2G".into();
config.read.read_ahead = "1G".into();
config.read.buffer_size = "512M".into();
config.read.multi_thread_streams = 2;
config.read.multi_thread_cutoff = "100M".into();
config.directory_cache.cache_time = "1h".into();
config.writeback.write_back = "5s".into();
config.writeback.transfers = 2;
config.protocols.enable_smb = true;
config.protocols.enable_nfs = false;
config.protocols.enable_webdav = false;
}
Self::Office => {
config.cache.max_size = "50G".into();
config.read.chunk_size = "64M".into();
config.read.chunk_limit = "256M".into();
config.read.read_ahead = "128M".into();
config.read.buffer_size = "128M".into();
config.read.multi_thread_streams = 4;
config.read.multi_thread_cutoff = "10M".into();
config.directory_cache.cache_time = "30m".into();
config.writeback.write_back = "5s".into();
config.writeback.transfers = 4;
config.protocols.enable_smb = true;
config.protocols.enable_nfs = false;
config.protocols.enable_webdav = true;
}
}
}
pub fn description(&self) -> &str {
match self {
Self::Photographer => "Large RAW file read performance (500G cache, 256M chunks)",
Self::Video => "Sequential read, large file prefetch (1T cache, 512M chunks)",
Self::Office => "Small file fast response, frequent sync (50G cache, 64M chunks)",
}
}
}
#[cfg(test)]
mod tests {
use super::*;
fn test_config() -> Config {
toml::from_str(
r#"
[[connections]]
name = "nas"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
[read]
[bandwidth]
[writeback]
[directory_cache]
[protocols]
[[shares]]
name = "photos"
connection = "nas"
remote_path = "/photos"
mount_point = "/mnt/photos"
"#,
)
.unwrap()
}
// --- FromStr ---
#[test]
fn test_preset_parse_valid() {
assert!(matches!("photographer".parse::<Preset>(), Ok(Preset::Photographer)));
assert!(matches!("video".parse::<Preset>(), Ok(Preset::Video)));
assert!(matches!("office".parse::<Preset>(), Ok(Preset::Office)));
}
#[test]
fn test_preset_parse_invalid() {
assert!("unknown".parse::<Preset>().is_err());
assert!("".parse::<Preset>().is_err());
assert!("Photographer".parse::<Preset>().is_err()); // case-sensitive
assert!("OFFICE".parse::<Preset>().is_err());
}
#[test]
fn test_preset_parse_error_message() {
let err = "bad".parse::<Preset>().unwrap_err();
assert!(err.to_string().contains("bad"), "error should mention the bad value");
}
// --- Preset::apply — field values ---
#[test]
fn test_photographer_apply_all_fields() {
let mut cfg = test_config();
Preset::Photographer.apply(&mut cfg);
assert_eq!(cfg.cache.max_size, "500G");
assert_eq!(cfg.read.chunk_size, "256M");
assert_eq!(cfg.read.chunk_limit, "1G");
assert_eq!(cfg.read.read_ahead, "512M");
assert_eq!(cfg.read.buffer_size, "256M");
assert_eq!(cfg.read.multi_thread_streams, 4);
assert_eq!(cfg.read.multi_thread_cutoff, "50M");
assert_eq!(cfg.directory_cache.cache_time, "2h");
assert_eq!(cfg.writeback.write_back, "5s");
assert_eq!(cfg.writeback.transfers, 4);
assert!(cfg.protocols.enable_smb);
assert!(!cfg.protocols.enable_nfs);
assert!(!cfg.protocols.enable_webdav);
}
#[test]
fn test_video_apply_all_fields() {
let mut cfg = test_config();
Preset::Video.apply(&mut cfg);
assert_eq!(cfg.cache.max_size, "1T");
assert_eq!(cfg.read.chunk_size, "512M");
assert_eq!(cfg.read.chunk_limit, "2G");
assert_eq!(cfg.read.read_ahead, "1G");
assert_eq!(cfg.read.buffer_size, "512M");
assert_eq!(cfg.read.multi_thread_streams, 2);
assert_eq!(cfg.read.multi_thread_cutoff, "100M");
assert_eq!(cfg.directory_cache.cache_time, "1h");
assert_eq!(cfg.writeback.write_back, "5s");
assert_eq!(cfg.writeback.transfers, 2);
assert!(cfg.protocols.enable_smb);
assert!(!cfg.protocols.enable_nfs);
assert!(!cfg.protocols.enable_webdav);
}
#[test]
fn test_office_apply_all_fields() {
let mut cfg = test_config();
Preset::Office.apply(&mut cfg);
assert_eq!(cfg.cache.max_size, "50G");
assert_eq!(cfg.read.chunk_size, "64M");
assert_eq!(cfg.read.chunk_limit, "256M");
assert_eq!(cfg.read.read_ahead, "128M");
assert_eq!(cfg.read.buffer_size, "128M");
assert_eq!(cfg.read.multi_thread_streams, 4);
assert_eq!(cfg.read.multi_thread_cutoff, "10M");
assert_eq!(cfg.directory_cache.cache_time, "30m");
assert_eq!(cfg.writeback.write_back, "5s");
assert_eq!(cfg.writeback.transfers, 4);
assert!(cfg.protocols.enable_smb);
assert!(!cfg.protocols.enable_nfs);
assert!(cfg.protocols.enable_webdav);
}
#[test]
fn test_preset_does_not_change_connections_or_shares() {
let mut cfg = test_config();
Preset::Photographer.apply(&mut cfg);
// Preset must never touch connection or share settings
assert_eq!(cfg.connections[0].host, "10.0.0.1");
assert_eq!(cfg.connections[0].user(), "admin");
assert_eq!(cfg.shares[0].name, "photos");
assert_eq!(cfg.shares[0].remote_path, "/photos");
}
#[test]
fn test_preset_apply_is_idempotent() {
let mut cfg = test_config();
Preset::Video.apply(&mut cfg);
let snapshot_chunk = cfg.read.chunk_size.clone();
Preset::Video.apply(&mut cfg);
assert_eq!(cfg.read.chunk_size, snapshot_chunk);
}
#[test]
fn test_presets_have_consistent_write_back() {
// All three presets should use the same write_back value (plan §1 unified)
let mut cfg = test_config();
Preset::Photographer.apply(&mut cfg);
let wb_p = cfg.writeback.write_back.clone();
Preset::Video.apply(&mut cfg);
let wb_v = cfg.writeback.write_back.clone();
Preset::Office.apply(&mut cfg);
let wb_o = cfg.writeback.write_back.clone();
assert_eq!(wb_p, wb_v, "Photographer and Video write_back must match");
assert_eq!(wb_v, wb_o, "Video and Office write_back must match");
}
// --- description ---
#[test]
fn test_description_mentions_cache_size() {
assert!(Preset::Photographer.description().contains("500G"));
assert!(Preset::Video.description().contains("1T"));
assert!(Preset::Office.description().contains("50G"));
}
}
pub fn run(config: &mut Config, config_path: &Path, preset_name: &str) -> Result<()> {
let preset: Preset = preset_name.parse()?;
preset.apply(config);
let toml = config.to_commented_toml();
std::fs::write(config_path, toml)?;
println!(
"Applied preset '{}': {}",
preset_name,
preset.description()
);
println!("Config written to {}", config_path.display());
println!("Restart warpgate to apply changes: systemctl restart warpgate");
Ok(())
}

src/cli/reconnect.rs Normal file

@ -0,0 +1,42 @@
//! `warpgate reconnect <share>` — re-probe and re-mount a single share.
//!
//! Sends a reconnect command to the running daemon via the web API.
//! Falls back to a direct probe if the daemon is not running.
use anyhow::Result;
use crate::config::Config;
use crate::daemon::DEFAULT_WEB_PORT;
use crate::rclone;
pub fn run(config: &Config, share_name: &str) -> Result<()> {
// Check share exists in config
let share = config
.find_share(share_name)
.ok_or_else(|| anyhow::anyhow!("Share '{}' not found in config", share_name))?;
// Try daemon API first
let url = format!(
"http://127.0.0.1:{}/api/reconnect/{}",
DEFAULT_WEB_PORT, share_name
);
match ureq::post(&url).send_json(serde_json::json!({})) {
Ok(resp) => {
let body: serde_json::Value = resp.into_body().read_json().unwrap_or_default();
if body["ok"].as_bool().unwrap_or(false) {
println!("Reconnecting share '{}'...", share_name);
println!("Check status with: warpgate status");
} else {
let msg = body["message"].as_str().unwrap_or("unknown error");
anyhow::bail!("Reconnect failed: {}", msg);
}
}
Err(_) => {
// Daemon not running — just probe directly
println!("Daemon not running. Testing direct probe...");
rclone::probe::probe_remote_path(config, share)?;
println!("Probe OK — start daemon with: systemctl start warpgate");
}
}
Ok(())
}

src/cli/setup.rs Normal file

@ -0,0 +1,292 @@
//! `warpgate setup` — interactive wizard for first-time configuration.
//!
//! Walks the user through NAS connection details, share paths, cache settings,
//! and preset selection, then writes a ready-to-deploy config file.
use std::path::PathBuf;
use anyhow::Result;
use crate::cli::preset::Preset;
use crate::config::{
BandwidthConfig, CacheConfig, Config, ConnectionConfig, DirectoryCacheConfig, Endpoint,
LogConfig, ProtocolsConfig, ReadConfig, ShareConfig, SftpEndpoint, SmbEndpoint, WarmupConfig,
WritebackConfig,
};
use crate::rclone::probe::ConnParams;
fn prompt(question: &str, default: Option<&str>) -> String {
use std::io::Write;
if let Some(def) = default {
print!("{} [{}]: ", question, def);
} else {
print!("{}: ", question);
}
std::io::stdout().flush().unwrap();
let mut input = String::new();
std::io::stdin().read_line(&mut input).unwrap();
let trimmed = input.trim().to_string();
if trimmed.is_empty() {
default.map(|d| d.to_string()).unwrap_or_default()
} else {
trimmed
}
}
fn prompt_password(question: &str) -> String {
use std::io::Write;
print!("{}: ", question);
std::io::stdout().flush().unwrap();
let mut input = String::new();
std::io::stdin().read_line(&mut input).unwrap();
input.trim().to_string()
}
pub fn run(output: Option<PathBuf>) -> Result<()> {
// Welcome banner
println!();
println!("=== Warpgate Setup Wizard ===");
println!("Configure your SSD caching proxy for remote NAS access.");
println!();
// --- NAS Connection ---
println!("--- NAS Connection ---");
let nas_host = prompt("NAS hostname or IP (e.g. 100.64.0.1)", None);
if nas_host.is_empty() {
anyhow::bail!("NAS hostname is required");
}
let protocol_choice = prompt("Protocol (1=SFTP, 2=SMB)", Some("1"));
let is_smb = protocol_choice == "2";
let nas_user = if is_smb {
prompt("SMB username", Some("admin"))
} else {
prompt("SFTP username", Some("admin"))
};
let (nas_pass, nas_key_file, smb_domain, smb_share) = if is_smb {
let pass = prompt_password("SMB password (required)");
if pass.is_empty() {
anyhow::bail!("SMB password is required");
}
let domain = prompt("SMB domain (optional, press Enter to skip)", Some(""));
let share = prompt("SMB share name (e.g. photos)", None);
if share.is_empty() {
anyhow::bail!("SMB share name is required");
}
let domain_opt = if domain.is_empty() { None } else { Some(domain) };
(Some(pass), None, domain_opt, Some(share))
} else {
let auth_method = prompt("Auth method (1=password, 2=SSH key)", Some("1"));
match auth_method.as_str() {
"2" => {
let key = prompt("SSH private key path", Some("/root/.ssh/id_rsa"));
(None, Some(key), None, None)
}
_ => {
let pass = prompt_password("SFTP password");
if pass.is_empty() {
anyhow::bail!("Password is required");
}
(Some(pass), None, None, None)
}
}
};
let default_port = if is_smb { "445" } else { "22" };
let conn_port: u16 = prompt("Port", Some(default_port))
.parse()
.unwrap_or(if is_smb { 445 } else { 22 });
let conn_name = prompt("Connection name (alphanumeric)", Some("nas"));
// --- Shares ---
println!();
println!("--- Shares ---");
println!("Configure at least one share (remote path → local mount).");
let mut shares = Vec::new();
loop {
let idx = shares.len() + 1;
println!();
println!("Share #{idx}:");
let remote_path = if is_smb {
prompt(" Remote path within share (e.g. / or /subfolder)", Some("/"))
} else {
prompt(" NAS remote path (e.g. /volume1/photos)", None)
};
if remote_path.is_empty() {
if shares.is_empty() {
println!(" At least one share is required.");
continue;
}
break;
}
let default_name = remote_path
.rsplit('/')
.next()
.unwrap_or("share")
.to_string();
let share_name = prompt(" Share name", Some(&default_name));
let default_mount = format!("/mnt/{}", share_name);
let mount_point = prompt(" Local mount point", Some(&default_mount));
shares.push(ShareConfig {
name: share_name,
connection: conn_name.clone(),
remote_path,
mount_point: PathBuf::from(mount_point),
read_only: false,
dir_refresh_interval: None,
});
let more = prompt(" Add another share? (y/N)", Some("N"));
if !more.eq_ignore_ascii_case("y") {
break;
}
}
// --- Cache ---
println!();
println!("--- Cache Settings ---");
let cache_dir = prompt(
"Cache directory (SSD recommended)",
Some("/var/cache/warpgate"),
);
let cache_max_size = prompt("Max cache size", Some("200G"));
// --- Preset ---
println!();
println!("--- Usage Preset ---");
println!(" 1. Photographer — large RAW files, 500G cache");
println!(" 2. Video — sequential read, 1T cache");
println!(" 3. Office — small files, frequent sync, 50G cache");
let preset_choice = prompt("Select preset (1/2/3)", Some("1"));
let preset = match preset_choice.as_str() {
"2" => Preset::Video,
"3" => Preset::Office,
_ => Preset::Photographer,
};
// Build config with defaults, then apply preset
let endpoint = if is_smb {
Endpoint::Smb(SmbEndpoint {
user: nas_user,
pass: nas_pass,
domain: smb_domain,
port: conn_port,
share: smb_share.unwrap(),
})
} else {
Endpoint::Sftp(SftpEndpoint {
user: nas_user,
pass: nas_pass,
key_file: nas_key_file,
port: conn_port,
connections: 8,
})
};
let mut config = Config {
connections: vec![ConnectionConfig {
name: conn_name.clone(),
host: nas_host.clone(),
endpoint,
}],
cache: CacheConfig {
dir: PathBuf::from(&cache_dir),
max_size: cache_max_size,
max_age: "720h".into(),
min_free: "10G".into(),
},
read: ReadConfig {
chunk_size: "256M".into(),
chunk_limit: "1G".into(),
read_ahead: "512M".into(),
buffer_size: "256M".into(),
multi_thread_streams: 4,
multi_thread_cutoff: "50M".into(),
},
bandwidth: BandwidthConfig {
limit_up: "0".into(),
limit_down: "0".into(),
adaptive: true,
},
writeback: WritebackConfig {
write_back: "5s".into(),
transfers: 4,
},
directory_cache: DirectoryCacheConfig {
cache_time: "1h".into(),
},
protocols: ProtocolsConfig {
enable_smb: true,
enable_nfs: false,
enable_webdav: false,
nfs_allowed_network: "192.168.0.0/24".into(),
webdav_port: 8080,
},
warmup: WarmupConfig::default(),
smb_auth: Default::default(),
dir_refresh: Default::default(),
log: LogConfig::default(),
web: Default::default(),
notifications: Default::default(),
shares,
};
preset.apply(&mut config);
// --- Connection test (rclone-based, validates credentials + share) ---
println!();
println!("Testing connection to {}:{}...", nas_host, conn_port);
let test_params = if is_smb {
ConnParams::Smb {
host: nas_host.clone(),
user: config.connections[0].user().to_string(),
pass: config.connections[0].pass().map(String::from),
domain: match &config.connections[0].endpoint {
Endpoint::Smb(smb) => smb.domain.clone(),
_ => None,
},
port: conn_port,
share: match &config.connections[0].endpoint {
Endpoint::Smb(smb) => smb.share.clone(),
_ => String::new(),
},
}
} else {
ConnParams::Sftp {
host: nas_host.clone(),
user: config.connections[0].user().to_string(),
pass: config.connections[0].pass().map(String::from),
key_file: match &config.connections[0].endpoint {
Endpoint::Sftp(sftp) => sftp.key_file.clone(),
_ => None,
},
port: conn_port,
}
};
match crate::rclone::probe::test_connection(&test_params) {
Ok(()) => println!(" Connection OK (rclone verified)"),
Err(e) => anyhow::bail!(
"Connection test failed for {}:{} — {}\n\
Check host, credentials, and ensure rclone is installed.",
nas_host, conn_port, e
),
}
// --- Write config ---
let config_path = output.unwrap_or_else(|| PathBuf::from("/etc/warpgate/config.toml"));
if let Some(parent) = config_path.parent() {
std::fs::create_dir_all(parent)?;
}
let toml = config.to_commented_toml();
std::fs::write(&config_path, toml)?;
println!();
println!("Config written to {}", config_path.display());
println!();
println!("Next steps:");
println!(" warpgate deploy — install services and start Warpgate");
Ok(())
}

View File

@ -8,6 +8,7 @@ use anyhow::{Context, Result};
use crate::config::Config;
use crate::rclone::config as rclone_config;
use crate::rclone::path as rclone_path;
const TEST_SIZE: usize = 10 * 1024 * 1024; // 10 MiB
@ -15,10 +16,10 @@ pub fn run(config: &Config) -> Result<()> {
let tmp_local = std::env::temp_dir().join("warpgate-speedtest");
// Use the first share's connection and remote_path for the speed test
let share = &config.shares[0];
let remote_path = format!(
"{}:{}/.warpgate-speedtest",
share.connection, share.remote_path
);
let conn = config
.connection_for_share(share)
.context("Connection not found for first share")?;
let remote_path = rclone_path::rclone_remote_subpath(conn, share, ".warpgate-speedtest");
// Create a 10 MiB test file
println!("Creating 10 MiB test file...");

View File

@ -144,6 +144,16 @@ fn print_api_status(api: &ApiStatus) -> Result<()> {
println!("Errored: {} files", total_errored);
}
// "Safe to disconnect" indicator
if total_dirty == 0 && total_transfers == 0 {
println!("\n[OK] All synced — safe to disconnect");
} else {
println!(
"\n[!!] {} dirty files, {} active transfers — DO NOT disconnect",
total_dirty, total_transfers
);
}
Ok(())
}

src/cli/update.rs (new file, +64)
View File

@ -0,0 +1,64 @@
//! `warpgate update` — check for newer versions of Warpgate.
//!
//! Queries the GitHub Releases API to compare the running version with the
//! latest published release and optionally prints installation instructions.
use anyhow::Result;
/// GitHub repository path (owner/repo).
const GITHUB_REPO: &str = "warpgate-project/warpgate";
/// Current version from Cargo.toml.
const CURRENT_VERSION: &str = env!("CARGO_PKG_VERSION");
pub fn run(apply: bool) -> Result<()> {
let api_url = format!(
"https://api.github.com/repos/{GITHUB_REPO}/releases/latest"
);
println!("Checking for updates...");
println!(" Current version: v{CURRENT_VERSION}");
let resp = ureq::get(&api_url)
.header("User-Agent", "warpgate-updater")
.call()
.map_err(|e| anyhow::anyhow!("Failed to reach GitHub API: {e}"))?;
let body: serde_json::Value = resp
.into_body()
.read_json()
.map_err(|e| anyhow::anyhow!("Failed to parse GitHub API response: {e}"))?;
let tag = body["tag_name"]
.as_str()
.unwrap_or("")
.trim_start_matches('v');
if tag.is_empty() {
anyhow::bail!("Could not determine latest version from GitHub API response");
}
if tag == CURRENT_VERSION {
println!(" Latest version: v{tag}");
println!("Already up to date (v{CURRENT_VERSION}).");
return Ok(());
}
println!(" Latest version: v{tag} ← new release available");
println!();
println!("Changelog: https://github.com/{GITHUB_REPO}/releases/tag/v{tag}");
if apply {
println!();
println!("To install the latest version, run:");
println!(
" curl -fsSL https://github.com/{GITHUB_REPO}/releases/download/v{tag}/warpgate-linux-x86_64 \\\n | sudo install -m 0755 /dev/stdin /usr/local/bin/warpgate"
);
println!(" sudo systemctl restart warpgate");
} else {
println!();
println!("Run `warpgate update --apply` to print the installation command.");
}
Ok(())
}
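`run` treats any tag that differs from `CURRENT_VERSION` as a new release, which also fires for a local build that is *ahead* of the latest published tag. If an ordered comparison is ever wanted, a minimal std-only sketch (`parse_semver` is a hypothetical helper, not part of this codebase):

```rust
/// Parse a plain "MAJOR.MINOR.PATCH" tag into a comparable tuple.
/// Returns None for anything that is not three numeric components.
fn parse_semver(tag: &str) -> Option<(u64, u64, u64)> {
    let mut parts = tag.split('.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    if parts.next().is_some() {
        return None; // reject "1.2.3.4" and similar
    }
    Some((major, minor, patch))
}

fn main() {
    // Tuples compare lexicographically, so "1.10.0" correctly orders above "1.9.9",
    // which a plain string comparison would get wrong.
    let current = parse_semver("1.9.9").unwrap();
    let latest = parse_semver("1.10.0").unwrap();
    println!("update available: {}", latest > current);
}
```

Pre-release tags like `1.2.0-rc1` would still need the string-equality fallback, which is one reason the simpler check in `run` is defensible.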

View File

@ -14,14 +14,18 @@ use tracing::{debug, info, warn};
use crate::config::Config;
use crate::daemon::{DaemonStatus, WarmupRuleState};
use crate::rclone::config as rclone_config;
use crate::rclone::path as rclone_path;
pub fn run(config: &Config, share_name: &str, path: &str, newer_than: Option<&str>) -> Result<()> {
let share = config
.find_share(share_name)
.with_context(|| format!("Share '{}' not found in config", share_name))?;
let conn = config
.connection_for_share(share)
.with_context(|| format!("Connection '{}' not found", share.connection))?;
let warmup_path = share.mount_point.join(path);
let remote_src = format!("{}:{}/{}", share.connection, share.remote_path, path);
let remote_src = rclone_path::rclone_remote_subpath(conn, share, path);
println!("Warming up: {remote_src}");
println!(" via mount: {}", warmup_path.display());
@ -69,8 +73,9 @@ pub fn run(config: &Config, share_name: &str, path: &str, newer_than: Option<&st
let mut skipped = 0usize;
let mut errors = 0usize;
let cache_prefix = rclone_path::vfs_cache_prefix(conn, share);
for file in &files {
if is_cached(config, &share.connection, &share.remote_path, path, file) {
if is_cached(config, &cache_prefix, path, file) {
skipped += 1;
continue;
}
@ -117,9 +122,12 @@ pub fn run_tracked(
let share = config
.find_share(share_name)
.with_context(|| format!("Share '{}' not found in config", share_name))?;
let conn = config
.connection_for_share(share)
.with_context(|| format!("Connection '{}' not found", share.connection))?;
let warmup_path = share.mount_point.join(path);
let remote_src = format!("{}:{}/{}", share.connection, share.remote_path, path);
let remote_src = rclone_path::rclone_remote_subpath(conn, share, path);
// Mark as Listing
{
@ -214,6 +222,7 @@ pub fn run_tracked(
}
info!(share = %share_name, path = %path, total, "warmup: caching started");
let cache_prefix = rclone_path::vfs_cache_prefix(conn, share);
for file in &files {
// Check shutdown / generation before each file
if shutdown.load(Ordering::SeqCst) {
@ -226,7 +235,7 @@ pub fn run_tracked(
}
}
if is_cached(config, &share.connection, &share.remote_path, path, file) {
if is_cached(config, &cache_prefix, path, file) {
let skipped = {
let mut status = shared_status.write().unwrap();
if let Some(rs) = status.warmup.get_mut(rule_index) {
@ -304,13 +313,15 @@ pub fn run_tracked(
}
/// Check if a file is already in the rclone VFS cache.
fn is_cached(config: &Config, connection: &str, remote_path: &str, warmup_path: &str, relative_path: &str) -> bool {
///
/// `cache_prefix` is the protocol-aware relative path from `rclone_path::vfs_cache_prefix`,
/// e.g. `nas/volume1/photos` (SFTP) or `office/photos/subfolder` (SMB).
fn is_cached(config: &Config, cache_prefix: &std::path::Path, warmup_path: &str, relative_path: &str) -> bool {
let cache_path = config
.cache
.dir
.join("vfs")
.join(connection)
.join(remote_path.trim_start_matches('/'))
.join(cache_prefix)
.join(warmup_path)
.join(relative_path);
cache_path.exists()
@ -319,14 +330,16 @@ fn is_cached(config: &Config, connection: &str, remote_path: &str, warmup_path:
#[cfg(test)]
mod tests {
use super::*;
use std::path::PathBuf;
fn test_config() -> Config {
toml::from_str(
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/warpgate-test-cache"
@ -347,56 +360,85 @@ mount_point = "/mnt/photos"
.unwrap()
}
fn smb_config() -> Config {
toml::from_str(
r#"
[[connections]]
name = "office"
host = "192.168.1.100"
protocol = "smb"
user = "admin"
pass = "secret"
share = "data"
port = 445
[cache]
dir = "/tmp/warpgate-test-cache"
[read]
[bandwidth]
[writeback]
[directory_cache]
[protocols]
[[shares]]
name = "docs"
connection = "office"
remote_path = "/subfolder"
mount_point = "/mnt/docs"
"#,
)
.unwrap()
}
#[test]
fn test_is_cached_nonexistent_file() {
let config = test_config();
assert!(!is_cached(&config, "nas", "/photos", "2024", "IMG_001.jpg"));
let prefix = PathBuf::from("nas/photos");
assert!(!is_cached(&config, &prefix, "2024", "IMG_001.jpg"));
}
#[test]
fn test_is_cached_deep_path() {
let config = test_config();
assert!(!is_cached(&config, "nas", "/photos", "Images/2024/January", "photo.cr3"));
let prefix = PathBuf::from("nas/photos");
assert!(!is_cached(&config, &prefix, "Images/2024/January", "photo.cr3"));
}
#[test]
fn test_is_cached_path_construction() {
fn test_is_cached_sftp_path_construction() {
let config = test_config();
let expected = std::path::PathBuf::from("/tmp/warpgate-test-cache")
.join("vfs")
.join("nas")
.join("photos")
.join("2024")
.join("IMG_001.jpg");
let share = config.find_share("photos").unwrap();
let conn = config.connection_for_share(share).unwrap();
let prefix = rclone_path::vfs_cache_prefix(conn, share);
let cache_path = config
.cache
.dir
.join("vfs")
.join("nas")
.join("photos")
.join("2024")
.join("IMG_001.jpg");
let expected = PathBuf::from("/tmp/warpgate-test-cache/vfs/nas/photos/2024/IMG_001.jpg");
let cache_path = config.cache.dir.join("vfs").join(&prefix).join("2024").join("IMG_001.jpg");
assert_eq!(cache_path, expected);
}
#[test]
fn test_is_cached_smb_path_construction() {
let config = smb_config();
let share = config.find_share("docs").unwrap();
let conn = config.connection_for_share(share).unwrap();
let prefix = rclone_path::vfs_cache_prefix(conn, share);
// SMB: includes share name "data" before "subfolder"
let expected = PathBuf::from("/tmp/warpgate-test-cache/vfs/office/data/subfolder/2024/file.jpg");
let cache_path = config.cache.dir.join("vfs").join(&prefix).join("2024").join("file.jpg");
assert_eq!(cache_path, expected);
}
#[test]
fn test_is_cached_remote_path_trimming() {
let config = test_config();
let share = config.find_share("photos").unwrap();
let conn = config.connection_for_share(share).unwrap();
let prefix = rclone_path::vfs_cache_prefix(conn, share);
let connection = "home";
let remote_path = "/volume1/photos";
let cache_path = config
.cache
.dir
.join("vfs")
.join(connection)
.join(remote_path.trim_start_matches('/'))
.join("2024")
.join("file.jpg");
assert!(cache_path.to_string_lossy().contains("home/volume1/photos"));
assert!(!cache_path.to_string_lossy().contains("home//volume1"));
let cache_path = config.cache.dir.join("vfs").join(&prefix).join("2024").join("file.jpg");
assert!(cache_path.to_string_lossy().contains("nas/photos"));
assert!(!cache_path.to_string_lossy().contains("nas//photos"));
}
}

src/cli/wifi.rs (new file, +6)
View File

@ -0,0 +1,6 @@
//! `warpgate setup-wifi` — WiFi AP + captive portal setup.
//!
//! TODO: WiFi AP setup (hostapd + dnsmasq + iptables).
//! Planned implementation: generate hostapd.conf, dnsmasq.conf, and iptables
//! rules to create a local WiFi AP that proxies client traffic through
//! the Warpgate cache layer.

View File

@ -33,6 +33,10 @@ pub struct Config {
pub dir_refresh: DirRefreshConfig,
#[serde(default)]
pub log: LogConfig,
#[serde(default)]
pub web: WebConfig,
#[serde(default)]
pub notifications: NotificationsConfig,
pub shares: Vec<ShareConfig>,
}
@ -64,27 +68,102 @@ fn default_log_level() -> String {
"info".into()
}
/// SFTP connection to a remote NAS.
/// Connection to a remote NAS (SFTP or SMB).
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct ConnectionConfig {
/// Unique name for this connection (used as rclone remote name).
pub name: String,
/// Remote NAS Tailscale IP or hostname.
pub nas_host: String,
/// SFTP username.
pub nas_user: String,
/// SFTP password (prefer key_file).
pub host: String,
/// Protocol-specific endpoint configuration.
#[serde(flatten)]
pub endpoint: Endpoint,
}
/// Protocol-specific endpoint configuration.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[serde(tag = "protocol", rename_all = "lowercase")]
pub enum Endpoint {
Sftp(SftpEndpoint),
Smb(SmbEndpoint),
}
/// SFTP endpoint configuration.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct SftpEndpoint {
pub user: String,
#[serde(default)]
pub nas_pass: Option<String>,
/// Path to SSH private key.
pub pass: Option<String>,
#[serde(default)]
pub nas_key_file: Option<String>,
/// SFTP port.
pub key_file: Option<String>,
#[serde(default = "default_sftp_port")]
pub sftp_port: u16,
/// SFTP connection pool size.
pub port: u16,
#[serde(default = "default_sftp_connections")]
pub sftp_connections: u32,
pub connections: u32,
}
/// SMB endpoint configuration.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct SmbEndpoint {
pub user: String,
#[serde(default)]
pub pass: Option<String>,
#[serde(default)]
pub domain: Option<String>,
#[serde(default = "default_smb_port")]
pub port: u16,
/// Windows share name (used in rclone path, not in rclone.conf).
pub share: String,
}
impl ConnectionConfig {
/// Protocol name string ("sftp" or "smb").
pub fn protocol_name(&self) -> &str {
match &self.endpoint {
Endpoint::Sftp(_) => "sftp",
Endpoint::Smb(_) => "smb",
}
}
/// Username for this connection.
pub fn user(&self) -> &str {
match &self.endpoint {
Endpoint::Sftp(e) => &e.user,
Endpoint::Smb(e) => &e.user,
}
}
/// Password (if set).
pub fn pass(&self) -> Option<&str> {
match &self.endpoint {
Endpoint::Sftp(e) => e.pass.as_deref(),
Endpoint::Smb(e) => e.pass.as_deref(),
}
}
/// Port number.
pub fn port(&self) -> u16 {
match &self.endpoint {
Endpoint::Sftp(e) => e.port,
Endpoint::Smb(e) => e.port,
}
}
/// Get SFTP endpoint if this is an SFTP connection.
pub fn sftp(&self) -> Option<&SftpEndpoint> {
match &self.endpoint {
Endpoint::Sftp(e) => Some(e),
_ => None,
}
}
/// Get SMB endpoint if this is an SMB connection.
pub fn smb(&self) -> Option<&SmbEndpoint> {
match &self.endpoint {
Endpoint::Smb(e) => Some(e),
_ => None,
}
}
}
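With `#[serde(tag = "protocol")]` on `Endpoint` flattened into `ConnectionConfig`, the `protocol` key in TOML selects the variant. Two minimal connection entries as the deserializer expects them (values illustrative, drawn from the test fixtures below):

```toml
# SFTP connection: `protocol = "sftp"` selects Endpoint::Sftp
[[connections]]
name = "nas"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
key_file = "/root/.ssh/id_rsa"
port = 22          # default via default_sftp_port
connections = 8    # default via default_sftp_connections

# SMB connection: `protocol = "smb"` selects Endpoint::Smb
[[connections]]
name = "office"
host = "192.168.1.100"
protocol = "smb"
user = "admin"
pass = "secret"    # required: validate() rejects SMB without a password
share = "data"     # Windows share name; must not contain /, \, or :
port = 445         # default via default_smb_port
```

Because the tag is flattened, `host` and `name` stay at the top level of each `[[connections]]` table rather than nesting under a protocol sub-table.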
/// SSD cache settings.
@ -179,12 +258,56 @@ pub struct ProtocolsConfig {
pub webdav_port: u16,
}
/// Web UI configuration.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct WebConfig {
/// Web UI password for HTTP Basic Auth. Empty = no auth (default).
#[serde(default)]
pub password: String,
}
/// Push notification configuration.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NotificationsConfig {
/// Webhook URL for notifications (Telegram/Bark/DingTalk compatible). Empty = disabled.
#[serde(default)]
pub webhook_url: String,
/// Cache usage % threshold to trigger notification (default: 80).
#[serde(default = "default_notify_cache_threshold")]
pub cache_threshold_pct: u8,
/// Minutes NAS must be offline before notification (default: 5).
#[serde(default = "default_notify_offline_minutes")]
pub nas_offline_minutes: u64,
/// Write-back queue depth that triggers notification (default: 50).
#[serde(default = "default_notify_writeback_depth")]
pub writeback_depth: u64,
}
impl Default for NotificationsConfig {
fn default() -> Self {
Self {
webhook_url: String::new(),
cache_threshold_pct: default_notify_cache_threshold(),
nas_offline_minutes: default_notify_offline_minutes(),
writeback_depth: default_notify_writeback_depth(),
}
}
}
fn default_notify_cache_threshold() -> u8 { 80 }
fn default_notify_offline_minutes() -> u64 { 5 }
fn default_notify_writeback_depth() -> u64 { 50 }
/// Warmup configuration — auto-cache paths on startup.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WarmupConfig {
/// Auto-warmup on startup (default: true when rules exist).
#[serde(default = "default_true")]
pub auto: bool,
/// Cron schedule for periodic cache warmup (e.g. "0 2 * * *" = 2am daily).
/// Empty = disabled (only runs on startup if auto=true).
#[serde(default)]
pub warmup_schedule: String,
/// Warmup rules — paths to pre-cache.
#[serde(default)]
pub rules: Vec<WarmupRule>,
@ -194,6 +317,7 @@ impl Default for WarmupConfig {
fn default() -> Self {
Self {
auto: true,
warmup_schedule: String::new(),
rules: Vec::new(),
}
}
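A config fragment exercising the new field (schedule value illustrative; rule entries omitted here); an empty string keeps the cron trigger disabled:

```toml
[warmup]
auto = true                    # warm rules once on startup
warmup_schedule = "0 2 * * *"  # also re-warm nightly at 2am; "" = startup only
```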
@ -277,6 +401,9 @@ fn default_sftp_port() -> u16 {
fn default_sftp_connections() -> u32 {
8
}
fn default_smb_port() -> u16 {
445
}
fn default_cache_max_size() -> String {
"200G".into()
}
@ -400,16 +527,32 @@ impl Config {
for conn in &self.connections {
writeln!(out, "[[connections]]").unwrap();
writeln!(out, "name = {:?}", conn.name).unwrap();
writeln!(out, "nas_host = {:?}", conn.nas_host).unwrap();
writeln!(out, "nas_user = {:?}", conn.nas_user).unwrap();
if let Some(ref pass) = conn.nas_pass {
writeln!(out, "nas_pass = {:?}", pass).unwrap();
writeln!(out, "host = {:?}", conn.host).unwrap();
writeln!(out, "protocol = {:?}", conn.protocol_name()).unwrap();
match &conn.endpoint {
Endpoint::Sftp(sftp) => {
writeln!(out, "user = {:?}", sftp.user).unwrap();
if let Some(ref pass) = sftp.pass {
writeln!(out, "pass = {:?}", pass).unwrap();
}
if let Some(ref key) = sftp.key_file {
writeln!(out, "key_file = {:?}", key).unwrap();
}
writeln!(out, "port = {}", sftp.port).unwrap();
writeln!(out, "connections = {}", sftp.connections).unwrap();
}
Endpoint::Smb(smb) => {
writeln!(out, "user = {:?}", smb.user).unwrap();
if let Some(ref pass) = smb.pass {
writeln!(out, "pass = {:?}", pass).unwrap();
}
if let Some(ref domain) = smb.domain {
writeln!(out, "domain = {:?}", domain).unwrap();
}
writeln!(out, "port = {}", smb.port).unwrap();
writeln!(out, "share = {:?}", smb.share).unwrap();
}
}
if let Some(ref key) = conn.nas_key_file {
writeln!(out, "nas_key_file = {:?}", key).unwrap();
}
writeln!(out, "sftp_port = {}", conn.sftp_port).unwrap();
writeln!(out, "sftp_connections = {}", conn.sftp_connections).unwrap();
writeln!(out).unwrap();
}
@ -484,6 +627,23 @@ impl Config {
writeln!(out, "recursive = {}", self.dir_refresh.recursive).unwrap();
writeln!(out).unwrap();
// --- Web UI ---
writeln!(out, "# --- Web UI (change = no restart) ---").unwrap();
writeln!(out, "[web]").unwrap();
writeln!(out, "# password = \"your-password\" # Set to enable HTTP Basic Auth").unwrap();
writeln!(out, "password = {:?}", self.web.password).unwrap();
writeln!(out).unwrap();
// --- Notifications ---
writeln!(out, "# --- Notifications (change = no restart) ---").unwrap();
writeln!(out, "[notifications]").unwrap();
writeln!(out, "# webhook_url = \"https://api.telegram.org/bot<token>/sendMessage?chat_id=<id>\"").unwrap();
writeln!(out, "webhook_url = {:?}", self.notifications.webhook_url).unwrap();
writeln!(out, "cache_threshold_pct = {}", self.notifications.cache_threshold_pct).unwrap();
writeln!(out, "nas_offline_minutes = {}", self.notifications.nas_offline_minutes).unwrap();
writeln!(out, "writeback_depth = {}", self.notifications.writeback_depth).unwrap();
writeln!(out).unwrap();
// --- Shares ---
writeln!(out, "# --- Shares (change = per-share restart) ---").unwrap();
for share in &self.shares {
@ -512,6 +672,8 @@ impl Config {
writeln!(out, "# --- Warmup (change = no restart) ---").unwrap();
writeln!(out, "[warmup]").unwrap();
writeln!(out, "auto = {}", self.warmup.auto).unwrap();
writeln!(out, "# warmup_schedule = \"0 2 * * *\" # Nightly at 2am").unwrap();
writeln!(out, "warmup_schedule = {:?}", self.warmup.warmup_schedule).unwrap();
writeln!(out).unwrap();
for rule in &self.warmup.rules {
writeln!(out, "[[warmup.rules]]").unwrap();
@ -554,7 +716,7 @@ impl Config {
anyhow::bail!("At least one [[connections]] entry is required");
}
// Validate connection names
// Validate connection names and protocol-specific fields
let mut seen_conn_names = std::collections::HashSet::new();
for (i, conn) in self.connections.iter().enumerate() {
if conn.name.is_empty() {
@ -574,6 +736,35 @@ impl Config {
conn.name
);
}
if conn.host.is_empty() {
anyhow::bail!("connections[{}]: host must not be empty", i);
}
// Protocol-specific validation
match &conn.endpoint {
Endpoint::Sftp(sftp) => {
if sftp.user.is_empty() {
anyhow::bail!("connections[{}]: SFTP user must not be empty", i);
}
}
Endpoint::Smb(smb) => {
if smb.user.is_empty() {
anyhow::bail!("connections[{}]: SMB user must not be empty", i);
}
if smb.share.is_empty() {
anyhow::bail!("connections[{}]: SMB share must not be empty", i);
}
if smb.share.contains(['/', '\\', ':']) {
anyhow::bail!(
"connections[{}]: SMB share '{}' must not contain /, \\, or :",
i,
smb.share
);
}
if smb.pass.as_ref().map_or(true, |p| p.is_empty()) {
anyhow::bail!("connections[{}]: SMB password is required", i);
}
}
}
}
// At least one share required
@ -633,6 +824,11 @@ impl Config {
}
}
// Validate notification thresholds
if self.notifications.cache_threshold_pct > 100 {
anyhow::bail!("notifications.cache_threshold_pct must be 0-100, got {}", self.notifications.cache_threshold_pct);
}
// Validate SMB auth
if self.smb_auth.enabled {
if self.smb_auth.username.is_none() {
@ -662,8 +858,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -688,12 +885,14 @@ mount_point = "/mnt/photos"
assert_eq!(config.connections.len(), 1);
assert_eq!(config.connections[0].name, "nas");
assert_eq!(config.connections[0].nas_host, "10.0.0.1");
assert_eq!(config.connections[0].nas_user, "admin");
assert_eq!(config.connections[0].sftp_port, 22);
assert_eq!(config.connections[0].sftp_connections, 8);
assert!(config.connections[0].nas_pass.is_none());
assert!(config.connections[0].nas_key_file.is_none());
assert_eq!(config.connections[0].host, "10.0.0.1");
assert_eq!(config.connections[0].protocol_name(), "sftp");
assert_eq!(config.connections[0].user(), "admin");
assert_eq!(config.connections[0].port(), 22);
let sftp = config.connections[0].sftp().unwrap();
assert_eq!(sftp.connections, 8);
assert!(sftp.pass.is_none());
assert!(sftp.key_file.is_none());
assert_eq!(config.cache.dir, PathBuf::from("/tmp/cache"));
assert_eq!(config.cache.max_size, "200G");
@ -737,12 +936,13 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "home"
nas_host = "192.168.1.100"
nas_user = "photographer"
nas_pass = "secret123"
nas_key_file = "/root/.ssh/id_rsa"
sftp_port = 2222
sftp_connections = 16
host = "192.168.1.100"
protocol = "sftp"
user = "photographer"
pass = "secret123"
key_file = "/root/.ssh/id_rsa"
port = 2222
connections = 16
[cache]
dir = "/mnt/ssd/cache"
@ -798,15 +998,14 @@ newer_than = "7d"
let config: Config = toml::from_str(toml_str).unwrap();
assert_eq!(config.connections[0].name, "home");
assert_eq!(config.connections[0].nas_host, "192.168.1.100");
assert_eq!(config.connections[0].nas_user, "photographer");
assert_eq!(config.connections[0].nas_pass.as_deref(), Some("secret123"));
assert_eq!(
config.connections[0].nas_key_file.as_deref(),
Some("/root/.ssh/id_rsa")
);
assert_eq!(config.connections[0].sftp_port, 2222);
assert_eq!(config.connections[0].sftp_connections, 16);
assert_eq!(config.connections[0].host, "192.168.1.100");
assert_eq!(config.connections[0].protocol_name(), "sftp");
assert_eq!(config.connections[0].user(), "photographer");
assert_eq!(config.connections[0].pass(), Some("secret123"));
let sftp = config.connections[0].sftp().unwrap();
assert_eq!(sftp.key_file.as_deref(), Some("/root/.ssh/id_rsa"));
assert_eq!(sftp.port, 2222);
assert_eq!(sftp.connections, 16);
assert_eq!(config.cache.max_size, "500G");
assert_eq!(config.cache.max_age, "1440h");
@ -845,16 +1044,18 @@ newer_than = "7d"
let toml_str = r#"
[[connections]]
name = "home"
nas_host = "10.0.0.1"
nas_user = "admin"
nas_key_file = "/root/.ssh/id_rsa"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
key_file = "/root/.ssh/id_rsa"
[[connections]]
name = "office"
nas_host = "192.168.1.100"
nas_user = "photographer"
nas_pass = "secret"
sftp_port = 2222
host = "192.168.1.100"
protocol = "sftp"
user = "photographer"
pass = "secret"
port = 2222
[cache]
dir = "/tmp/cache"
@ -883,7 +1084,7 @@ mount_point = "/mnt/projects"
assert_eq!(config.connections.len(), 2);
assert_eq!(config.connections[0].name, "home");
assert_eq!(config.connections[1].name, "office");
assert_eq!(config.connections[1].sftp_port, 2222);
assert_eq!(config.connections[1].port(), 2222);
assert_eq!(config.shares[0].connection, "home");
assert_eq!(config.shares[1].connection, "office");
@ -894,7 +1095,7 @@ mount_point = "/mnt/projects"
let share = &config.shares[0];
let conn = config.connection_for_share(share).unwrap();
assert_eq!(conn.nas_host, "10.0.0.1");
assert_eq!(conn.host, "10.0.0.1");
}
#[test]
@ -902,7 +1103,8 @@ mount_point = "/mnt/projects"
let toml_str = r#"
[[connections]]
name = "nas"
nas_user = "admin"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -939,9 +1141,10 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
sftp_connections = 999
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
connections = 999
[cache]
dir = "/tmp/cache"
@ -960,7 +1163,7 @@ remote_path = "/photos"
mount_point = "/mnt/photos"
"#;
let config: Config = toml::from_str(toml_str).unwrap();
assert_eq!(config.connections[0].sftp_connections, 999);
assert_eq!(config.connections[0].sftp().unwrap().connections, 999);
assert_eq!(config.cache.max_size, "999T");
}
@ -969,8 +1172,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[read]
[bandwidth]
@ -993,7 +1197,7 @@ mount_point = "/mnt/photos"
let config: Config = toml::from_str(minimal_toml()).unwrap();
let serialized = toml::to_string(&config).unwrap();
let config2: Config = toml::from_str(&serialized).unwrap();
assert_eq!(config.connections[0].nas_host, config2.connections[0].nas_host);
assert_eq!(config.connections[0].host, config2.connections[0].host);
assert_eq!(config.cache.max_size, config2.cache.max_size);
assert_eq!(config.writeback.transfers, config2.writeback.transfers);
}
@ -1006,7 +1210,7 @@ mount_point = "/mnt/photos"
let config2: Config = toml::from_str(&commented).unwrap();
config2.validate().unwrap();
assert_eq!(config.connections[0].name, config2.connections[0].name);
assert_eq!(config.connections[0].nas_host, config2.connections[0].nas_host);
assert_eq!(config.connections[0].host, config2.connections[0].host);
assert_eq!(config.cache.dir, config2.cache.dir);
assert_eq!(config.cache.max_size, config2.cache.max_size);
assert_eq!(config.read.chunk_size, config2.read.chunk_size);
@ -1024,12 +1228,13 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "home"
nas_host = "192.168.1.100"
nas_user = "photographer"
nas_pass = "secret123"
nas_key_file = "/root/.ssh/id_rsa"
sftp_port = 2222
sftp_connections = 16
host = "192.168.1.100"
protocol = "sftp"
user = "photographer"
pass = "secret123"
key_file = "/root/.ssh/id_rsa"
port = 2222
connections = 16
[cache]
dir = "/mnt/ssd/cache"
@ -1094,9 +1299,9 @@ newer_than = "7d"
config2.validate().unwrap();
// All fields should survive the round-trip
assert_eq!(config.connections[0].nas_pass, config2.connections[0].nas_pass);
assert_eq!(config.connections[0].nas_key_file, config2.connections[0].nas_key_file);
assert_eq!(config.connections[0].sftp_port, config2.connections[0].sftp_port);
assert_eq!(config.connections[0].pass(), config2.connections[0].pass());
assert_eq!(config.connections[0].sftp().unwrap().key_file, config2.connections[0].sftp().unwrap().key_file);
assert_eq!(config.connections[0].port(), config2.connections[0].port());
assert_eq!(config.smb_auth.enabled, config2.smb_auth.enabled);
assert_eq!(config.smb_auth.username, config2.smb_auth.username);
assert_eq!(config.smb_auth.smb_pass, config2.smb_auth.smb_pass);
@ -1246,9 +1451,10 @@ path = "Images/2024"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
nas_pass = "secret"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
pass = "secret"
[cache]
dir = "/tmp/cache"
@ -1324,8 +1530,9 @@ read_only = true
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1368,8 +1575,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1400,8 +1608,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1431,8 +1640,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1482,8 +1692,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1516,13 +1727,15 @@ mount_point = "/mnt/other"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[[connections]]
name = "nas"
nas_host = "10.0.0.2"
nas_user = "admin2"
host = "10.0.0.2"
protocol = "sftp"
user = "admin2"
[cache]
dir = "/tmp/cache"
@ -1549,8 +1762,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "my nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1577,8 +1791,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1605,8 +1820,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1633,8 +1849,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1661,8 +1878,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1689,8 +1907,9 @@ mount_point = "mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1723,8 +1942,9 @@ mount_point = "/mnt/data"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1758,8 +1978,9 @@ path = "2024"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1789,8 +2010,9 @@ mount_point = "/mnt/photos"
let toml_str = r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@ -1816,6 +2038,48 @@ mount_point = "/mnt/photos"
assert!(err.contains("username"), "got: {err}");
}
#[test]
fn test_validate_smb_share_illegal_chars() {
// Parse a valid SMB config, then mutate the share field to avoid TOML escaping issues.
let toml_str = r#"
[[connections]]
name = "nas"
host = "10.0.0.1"
protocol = "smb"
user = "admin"
pass = "secret"
share = "photos"
[cache]
dir = "/tmp/cache"
[read]
[bandwidth]
[writeback]
[directory_cache]
[protocols]
[[shares]]
name = "photos"
connection = "nas"
remote_path = "/"
mount_point = "/mnt/photos"
"#;
for bad_share in &["photos/raw", "photos\\raw", "photos:raw"] {
let mut config: Config = toml::from_str(toml_str).unwrap();
if let Endpoint::Smb(ref mut smb) = config.connections[0].endpoint {
smb.share = bad_share.to_string();
}
let err = config.validate().unwrap_err().to_string();
assert!(
err.contains("must not contain"),
"share='{}' should fail validation, got: {}",
bad_share,
err
);
}
}
#[test]
fn test_is_valid_remote_name() {
assert!(is_valid_remote_name("home"));
@ -1827,4 +2091,127 @@ mount_point = "/mnt/photos"
assert!(!is_valid_remote_name("nas:1"));
assert!(!is_valid_remote_name("nas/1"));
}
// -----------------------------------------------------------------------
// WebConfig
// -----------------------------------------------------------------------
#[test]
fn test_web_config_default_password_empty() {
let config: Config = toml::from_str(minimal_toml()).unwrap();
assert_eq!(config.web.password, "", "default web password should be empty");
}
#[test]
fn test_web_config_password_set() {
let toml_str = format!("{}\n[web]\npassword = \"s3cr3t\"", minimal_toml());
let config: Config = toml::from_str(&toml_str).unwrap();
assert_eq!(config.web.password, "s3cr3t");
}
#[test]
fn test_web_config_serialization_roundtrip() {
let toml_str = format!("{}\n[web]\npassword = \"mypass\"", minimal_toml());
let config: Config = toml::from_str(&toml_str).unwrap();
let serialized = config.to_commented_toml();
let config2: Config = toml::from_str(&serialized).unwrap();
assert_eq!(config.web.password, config2.web.password);
}
// -----------------------------------------------------------------------
// NotificationsConfig
// -----------------------------------------------------------------------
#[test]
fn test_notifications_config_defaults() {
let config: Config = toml::from_str(minimal_toml()).unwrap();
assert_eq!(config.notifications.webhook_url, "");
assert_eq!(config.notifications.cache_threshold_pct, 80);
assert_eq!(config.notifications.nas_offline_minutes, 5);
assert_eq!(config.notifications.writeback_depth, 50);
}
#[test]
fn test_notifications_config_all_fields() {
let toml_str = format!(
"{}\n[notifications]\nwebhook_url = \"https://hook.example.com\"\ncache_threshold_pct = 90\nnas_offline_minutes = 10\nwriteback_depth = 100",
minimal_toml()
);
let config: Config = toml::from_str(&toml_str).unwrap();
assert_eq!(config.notifications.webhook_url, "https://hook.example.com");
assert_eq!(config.notifications.cache_threshold_pct, 90);
assert_eq!(config.notifications.nas_offline_minutes, 10);
assert_eq!(config.notifications.writeback_depth, 100);
}
#[test]
fn test_notifications_config_partial_override_keeps_defaults() {
// Partial [notifications] section: only webhook_url set
let toml_str = format!(
"{}\n[notifications]\nwebhook_url = \"https://example.com\"",
minimal_toml()
);
let config: Config = toml::from_str(&toml_str).unwrap();
assert_eq!(config.notifications.webhook_url, "https://example.com");
assert_eq!(config.notifications.cache_threshold_pct, 80); // still default
assert_eq!(config.notifications.nas_offline_minutes, 5);
assert_eq!(config.notifications.writeback_depth, 50);
}
#[test]
fn test_notifications_config_serialization_roundtrip() {
let toml_str = format!(
"{}\n[notifications]\nwebhook_url = \"https://rt.test\"\ncache_threshold_pct = 70\nnas_offline_minutes = 3\nwriteback_depth = 25",
minimal_toml()
);
let config: Config = toml::from_str(&toml_str).unwrap();
let serialized = config.to_commented_toml();
let config2: Config = toml::from_str(&serialized).unwrap();
assert_eq!(config.notifications.webhook_url, config2.notifications.webhook_url);
assert_eq!(config.notifications.cache_threshold_pct, config2.notifications.cache_threshold_pct);
assert_eq!(config.notifications.nas_offline_minutes, config2.notifications.nas_offline_minutes);
assert_eq!(config.notifications.writeback_depth, config2.notifications.writeback_depth);
}
// -----------------------------------------------------------------------
// LogConfig
// -----------------------------------------------------------------------
#[test]
fn test_log_config_defaults() {
let config: Config = toml::from_str(minimal_toml()).unwrap();
assert_eq!(config.log.file, "/var/log/warpgate/warpgate.log");
assert_eq!(config.log.level, "info");
}
#[test]
fn test_log_config_custom_values() {
let toml_str = format!(
"{}\n[log]\nfile = \"/tmp/warpgate-test.log\"\nlevel = \"debug\"",
minimal_toml()
);
let config: Config = toml::from_str(&toml_str).unwrap();
assert_eq!(config.log.file, "/tmp/warpgate-test.log");
assert_eq!(config.log.level, "debug");
}
#[test]
fn test_log_config_empty_file_disables_file_logging() {
let toml_str = format!("{}\n[log]\nfile = \"\"", minimal_toml());
let config: Config = toml::from_str(&toml_str).unwrap();
assert_eq!(config.log.file, "", "empty file = no file logging");
}
#[test]
fn test_log_config_serialization_roundtrip() {
let toml_str = format!(
"{}\n[log]\nfile = \"/var/log/wg.log\"\nlevel = \"warn\"",
minimal_toml()
);
let config: Config = toml::from_str(&toml_str).unwrap();
let serialized = config.to_commented_toml();
let config2: Config = toml::from_str(&serialized).unwrap();
assert_eq!(config.log.file, config2.log.file);
assert_eq!(config.log.level, config2.log.level);
}
}


@@ -286,8 +286,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@@ -396,7 +397,7 @@ mount_point = "/mnt/photos"
fn test_connection_modified_affects_shares() {
let old = minimal_config();
let mut new = old.clone();
new.connections[0].nas_host = "192.168.1.1".to_string();
new.connections[0].host = "192.168.1.1".to_string();
let d = diff(&old, &new);
assert_eq!(d.connections_modified, vec!["nas"]);
// Share "photos" references "nas", so it should be in shares_modified
@@ -411,12 +412,14 @@ mount_point = "/mnt/photos"
let mut new = old.clone();
new.connections.push(crate::config::ConnectionConfig {
name: "office".to_string(),
nas_host: "10.0.0.2".to_string(),
nas_user: "admin".to_string(),
nas_pass: None,
nas_key_file: None,
sftp_port: 22,
sftp_connections: 8,
host: "10.0.0.2".to_string(),
endpoint: crate::config::Endpoint::Sftp(crate::config::SftpEndpoint {
user: "admin".to_string(),
pass: None,
key_file: None,
port: 22,
connections: 8,
}),
});
let d = diff(&old, &new);
assert_eq!(d.connections_added, vec!["office"]);
@@ -429,13 +432,15 @@ mount_point = "/mnt/photos"
r#"
[[connections]]
name = "home"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[[connections]]
name = "office"
nas_host = "10.0.0.2"
nas_user = "admin"
host = "10.0.0.2"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@@ -477,13 +482,15 @@ mount_point = "/mnt/projects"
r#"
[[connections]]
name = "home"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[[connections]]
name = "office"
nas_host = "10.0.0.2"
nas_user = "admin"
host = "10.0.0.2"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"


@@ -60,6 +60,18 @@ pub struct DaemonStatus {
pub dir_refresh_dirs_ok: HashMap<String, usize>,
/// Number of subdirectories that failed to refresh in the last cycle, keyed by share name.
pub dir_refresh_dirs_failed: HashMap<String, usize>,
/// Whether all NAS connections are currently unreachable.
pub nas_offline: bool,
/// Whether all write-back has completed (dirty_count=0, transfers=0).
pub all_synced: bool,
/// When NAS first went offline (for offline-duration notification).
pub nas_offline_since: Option<Instant>,
/// Whether we've already sent the NAS-offline notification (reset on reconnect).
pub nas_offline_notified: bool,
/// Cache warning level already notified (0=none, 3=writeback depth).
pub cache_notified_level: u8,
/// Whether we've already sent the cache-threshold notification (reset when usage drops).
pub cache_threshold_notified: bool,
}
impl DaemonStatus {
@@ -93,6 +105,12 @@ impl DaemonStatus {
dir_refresh_gen_arc: Arc::new(AtomicU64::new(0)),
dir_refresh_dirs_ok: HashMap::new(),
dir_refresh_dirs_failed: HashMap::new(),
nas_offline: false,
all_synced: true,
nas_offline_since: None,
nas_offline_notified: false,
cache_notified_level: 0,
cache_threshold_notified: false,
}
}
@@ -301,6 +319,8 @@ pub enum SupervisorCmd {
Shutdown,
/// Live bandwidth adjustment (Tier A — no restart needed).
BwLimit { up: String, down: String },
/// Reconnect (re-probe + re-mount) a single share by name.
Reconnect(String),
}
#[cfg(test)]


@@ -31,7 +31,26 @@ pub fn run(config: &Config) -> Result<()> {
println!("Generating rclone config...");
rclone::config::write_config(config)?;
// Step 5: Generate service configs based on protocol toggles
// Step 5: Test NAS connectivity for each share
println!("Testing NAS connectivity...");
for share in &config.shares {
print!(" Probing {}:{} ... ", share.connection, share.remote_path);
match rclone::probe::probe_remote_path(config, share) {
Ok(()) => println!("OK"),
Err(e) => {
println!("FAILED");
anyhow::bail!(
"NAS connection test failed for share '{}': {}\n\n\
Fix the connection settings in your config before deploying.",
share.name,
e
);
}
}
}
println!(" All shares reachable.");
// Step 6: Generate service configs based on protocol toggles
println!("Generating service configs...");
if config.protocols.enable_smb {
samba::write_config(config)?;
@@ -49,11 +68,11 @@ pub fn run(config: &Config) -> Result<()> {
let _ = webdav::build_serve_command(config);
}
// Step 6: Install single warpgate.service unit (supervisor mode)
// Step 7: Install single warpgate.service unit (supervisor mode)
println!("Installing warpgate.service...");
systemd::install_run_unit(config)?;
// Step 7: Enable and start the unified service
// Step 8: Enable and start the unified service
println!("Starting warpgate service...");
systemd::enable_and_start_run()?;


@@ -84,14 +84,44 @@ enum Commands {
#[arg(short, long)]
output: Option<PathBuf>,
},
/// Apply a usage preset (photographer/video/office) to current config.
Preset {
/// Preset name: photographer, video, or office.
name: String,
},
/// Interactive setup wizard — configure Warpgate step by step.
Setup {
/// Output config file path.
#[arg(short, long)]
output: Option<PathBuf>,
},
/// Reconnect a share (re-probe + re-mount) without full restart.
Reconnect {
/// Share name to reconnect.
share: String,
},
/// Check for a newer version of Warpgate.
Update {
/// Download and print install instructions for the latest binary.
#[arg(long)]
apply: bool,
},
/// Set up a local WiFi AP + captive portal (requires hostapd + dnsmasq).
SetupWifi,
/// Clone a network interface MAC address for WiFi AP passthrough.
CloneMac {
/// Network interface to clone the MAC address from.
interface: String,
},
}
fn main() -> Result<()> {
let cli = Cli::parse();
match cli.command {
// config-init doesn't need an existing config file
// config-init and setup don't need an existing config file
Commands::ConfigInit { output } => cli::config_init::run(output),
Commands::Setup { output } => cli::setup::run(output),
// deploy loads config if it exists, or generates one
Commands::Deploy => {
let config = load_config_or_default(&cli.config)?;
@@ -119,8 +149,21 @@ fn main() -> Result<()> {
}
Commands::Log { lines, follow } => cli::log::run(&config, lines, follow),
Commands::SpeedTest => cli::speed_test::run(&config),
Commands::Preset { name } => {
let mut config = config;
cli::preset::run(&mut config, &cli.config, &name)
}
Commands::Reconnect { share } => cli::reconnect::run(&config, &share),
Commands::Update { apply } => cli::update::run(apply),
Commands::SetupWifi => {
todo!("WiFi AP setup not yet implemented — see src/cli/wifi.rs")
}
Commands::CloneMac { .. } => {
todo!("MAC clone not yet implemented — see src/cli/wifi.rs")
}
// already handled above
Commands::Run | Commands::ConfigInit { .. } | Commands::Deploy => unreachable!(),
Commands::Run | Commands::ConfigInit { .. } | Commands::Deploy
| Commands::Setup { .. } => unreachable!(),
}
}
}


@@ -10,33 +10,54 @@ use crate::config::Config;
/// Default path for generated rclone config.
pub const RCLONE_CONF_PATH: &str = "/etc/warpgate/rclone.conf";
/// Generate rclone.conf content with one SFTP remote section per connection.
/// Generate rclone.conf content with one remote section per connection.
///
/// Each connection produces an INI-style `[name]` section (where `name` is
/// `ConnectionConfig.name`) containing all SFTP parameters.
/// `ConnectionConfig.name`) containing all protocol-specific parameters.
pub fn generate(config: &Config) -> Result<String> {
use crate::config::Endpoint;
let mut conf = String::new();
for conn in &config.connections {
writeln!(conf, "[{}]", conn.name)?;
writeln!(conf, "type = sftp")?;
writeln!(conf, "host = {}", conn.nas_host)?;
writeln!(conf, "user = {}", conn.nas_user)?;
writeln!(conf, "port = {}", conn.sftp_port)?;
if let Some(pass) = &conn.nas_pass {
let obscured = obscure_password(pass)?;
writeln!(conf, "pass = {obscured}")?;
}
if let Some(key_file) = &conn.nas_key_file {
writeln!(conf, "key_file = {key_file}")?;
match &conn.endpoint {
Endpoint::Sftp(sftp) => {
writeln!(conf, "type = sftp")?;
writeln!(conf, "host = {}", conn.host)?;
writeln!(conf, "user = {}", sftp.user)?;
writeln!(conf, "port = {}", sftp.port)?;
if let Some(pass) = &sftp.pass {
let obscured = obscure_password(pass)?;
writeln!(conf, "pass = {obscured}")?;
}
if let Some(key_file) = &sftp.key_file {
writeln!(conf, "key_file = {key_file}")?;
}
writeln!(conf, "connections = {}", sftp.connections)?;
// Disable hash checking — many NAS SFTP servers (e.g. Synology) don't support
// running shell commands like md5sum, causing upload verification to fail.
writeln!(conf, "disable_hashcheck = true")?;
}
Endpoint::Smb(smb) => {
writeln!(conf, "type = smb")?;
writeln!(conf, "host = {}", conn.host)?;
writeln!(conf, "user = {}", smb.user)?;
writeln!(conf, "port = {}", smb.port)?;
if let Some(pass) = &smb.pass {
let obscured = obscure_password(pass)?;
writeln!(conf, "pass = {obscured}")?;
}
if let Some(domain) = &smb.domain {
writeln!(conf, "domain = {domain}")?;
}
}
}
writeln!(conf, "connections = {}", conn.sftp_connections)?;
// Disable hash checking — many NAS SFTP servers (e.g. Synology) don't support
// running shell commands like md5sum, causing upload verification to fail.
writeln!(conf, "disable_hashcheck = true")?;
writeln!(conf)?; // blank line between sections
}
@@ -44,7 +65,7 @@ pub fn generate(config: &Config) -> Result<String> {
}
/// Obscure a password using `rclone obscure` (required for rclone.conf).
fn obscure_password(plain: &str) -> Result<String> {
pub(crate) fn obscure_password(plain: &str) -> Result<String> {
let output = std::process::Command::new("rclone")
.args(["obscure", plain])
.output()
@@ -83,8 +104,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@@ -125,7 +147,9 @@ mount_point = "/mnt/photos"
#[test]
fn test_generate_rclone_config_with_key_file() {
let mut config = test_config();
config.connections[0].nas_key_file = Some("/root/.ssh/id_rsa".into());
if let crate::config::Endpoint::Sftp(ref mut sftp) = config.connections[0].endpoint {
sftp.key_file = Some("/root/.ssh/id_rsa".into());
}
let content = generate(&config).unwrap();
assert!(content.contains("key_file = /root/.ssh/id_rsa"));
@@ -134,8 +158,10 @@ mount_point = "/mnt/photos"
#[test]
fn test_generate_rclone_config_custom_port_and_connections() {
let mut config = test_config();
config.connections[0].sftp_port = 2222;
config.connections[0].sftp_connections = 16;
if let crate::config::Endpoint::Sftp(ref mut sftp) = config.connections[0].endpoint {
sftp.port = 2222;
sftp.connections = 16;
}
let content = generate(&config).unwrap();
assert!(content.contains("port = 2222"));
@@ -160,14 +186,16 @@ mount_point = "/mnt/photos"
r#"
[[connections]]
name = "home"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[[connections]]
name = "office"
nas_host = "192.168.1.100"
nas_user = "photographer"
sftp_port = 2222
host = "192.168.1.100"
protocol = "sftp"
user = "photographer"
port = 2222
[cache]
dir = "/tmp/cache"
@@ -205,4 +233,50 @@ mount_point = "/mnt/projects"
assert!(content.contains("user = photographer"));
assert!(content.contains("port = 2222"));
}
#[test]
fn test_generate_smb_connection() {
// Note: no password to avoid requiring `rclone obscure` in test env.
// generate() doesn't call validate(), so missing password is fine here.
let config: Config = toml::from_str(
r#"
[[connections]]
name = "office"
host = "192.168.1.100"
protocol = "smb"
user = "photographer"
share = "photos"
[cache]
dir = "/tmp/cache"
[read]
[bandwidth]
[writeback]
[directory_cache]
[protocols]
[[shares]]
name = "photos"
connection = "office"
remote_path = "/subfolder"
mount_point = "/mnt/photos"
"#,
)
.unwrap();
let content = generate(&config).unwrap();
assert!(content.contains("[office]"));
assert!(content.contains("type = smb"));
assert!(content.contains("host = 192.168.1.100"));
assert!(content.contains("user = photographer"));
assert!(content.contains("port = 445"));
// Should NOT contain SFTP-specific fields
assert!(!content.contains("connections ="));
assert!(!content.contains("disable_hashcheck"));
assert!(!content.contains("key_file"));
// Should NOT contain password line (no pass set)
assert!(!content.contains("pass ="));
}
}


@@ -1,4 +1,5 @@
pub mod config;
pub mod mount;
pub mod path;
pub mod probe;
pub mod rc;

View File

@@ -16,7 +16,13 @@ pub fn build_mount_args(config: &Config, share: &ShareConfig, rc_port: u16) -> V
// Subcommand and source:dest
args.push("mount".into());
args.push(format!("{}:{}", share.connection, share.remote_path));
let source = if let Some(conn) = config.connection_for_share(share) {
super::path::rclone_remote_path(conn, share)
} else {
// Fallback if connection not found (shouldn't happen with validated config)
format!("{}:{}", share.connection, share.remote_path)
};
args.push(source);
args.push(share.mount_point.display().to_string());
// Point to our generated rclone.conf
@@ -169,8 +175,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"

src/rclone/path.rs (new file, 309 lines)

@@ -0,0 +1,309 @@
//! Path resolution for rclone remotes across protocols.
//!
//! SFTP and SMB have different rclone path semantics:
//!
//! | Operation | SFTP | SMB |
//! |-----------|--------------------|-----------------------------|
//! | Mount | `conn:/vol/photos` | `conn:sharename/subfolder` |
//! | Test | `conn:/` | `conn:sharename/` |
//! | Browse | `conn:/path` | `conn:sharename/path` |
use std::path::PathBuf;
use crate::config::{ConnectionConfig, Endpoint, ShareConfig};
/// Low-level: build an SMB rclone path with share prefix.
///
/// Joins `remote_name:share/path`, stripping any leading `/` from `path`.
/// Used by both `ConnectionConfig`-based and `ConnParams`-based callers
/// so the SMB path rule lives in exactly one place.
pub(crate) fn smb_remote(remote_name: &str, share: &str, path: &str) -> String {
let relative = path.trim_start_matches('/');
if relative.is_empty() {
format!("{}:{}", remote_name, share)
} else {
format!("{}:{}/{}", remote_name, share, relative)
}
}
/// Build the rclone remote path for mounting a share.
///
/// - SFTP: `connection:remote_path` (e.g. `nas:/volume1/photos`)
/// - SMB: `connection:share/relative_path` (e.g. `office:photos/subfolder`)
///
/// For SMB, the share's `remote_path` is treated as relative within `SmbEndpoint.share`.
/// A leading `/` is stripped.
pub fn rclone_remote_path(conn: &ConnectionConfig, share: &ShareConfig) -> String {
match &conn.endpoint {
Endpoint::Sftp(_) => {
format!("{}:{}", share.connection, share.remote_path)
}
Endpoint::Smb(smb) => smb_remote(&share.connection, &smb.share, &share.remote_path),
}
}
/// Build an rclone path for a sub-path within a share (warmup, speed-test, etc.).
///
/// Appends `subpath` to the base `rclone_remote_path`.
/// - SFTP: `nas:/volume1/photos/2024`
/// - SMB: `office:photos/subfolder/2024`
pub fn rclone_remote_subpath(
conn: &ConnectionConfig,
share: &ShareConfig,
subpath: &str,
) -> String {
let base = rclone_remote_path(conn, share);
let subpath = subpath.trim_matches('/');
if subpath.is_empty() {
base
} else {
// Trim trailing '/' from base to avoid double-slash (e.g. "nas:/" + "foo" → "nas:/foo")
let base = base.trim_end_matches('/');
format!("{}/{}", base, subpath)
}
}
/// Return the relative directory under the rclone VFS cache for a share.
///
/// rclone stores cached files at `{cache_dir}/vfs/{connection}/{path}`.
/// - SFTP `nas:/volume1/photos` → `nas/volume1/photos`
/// - SMB `office:photos/subfolder` → `office/photos/subfolder`
pub fn vfs_cache_prefix(conn: &ConnectionConfig, share: &ShareConfig) -> PathBuf {
let remote = rclone_remote_path(conn, share);
// remote is "name:path" — split on first ':'
let (name, path) = remote.split_once(':').unwrap_or((&remote, ""));
let path = path.trim_start_matches('/');
PathBuf::from(name).join(path)
}
/// Build the rclone remote path for testing connectivity (list root).
///
/// - SFTP: `connection:/`
/// - SMB: `connection:share/`
pub fn rclone_test_path(conn: &ConnectionConfig) -> String {
match &conn.endpoint {
Endpoint::Sftp(_) => {
format!("{}:/", conn.name)
}
Endpoint::Smb(smb) => {
format!("{}:{}/", conn.name, smb.share)
}
}
}
/// Build the rclone remote path for browsing directories.
///
/// - SFTP: `connection:path`
/// - SMB: `connection:share/path`
pub fn rclone_browse_path(conn: &ConnectionConfig, path: &str) -> String {
match &conn.endpoint {
Endpoint::Sftp(_) => {
format!("{}:{}", conn.name, path)
}
Endpoint::Smb(smb) => smb_remote(&conn.name, &smb.share, path),
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::config::{Endpoint, SftpEndpoint, SmbEndpoint};
fn sftp_conn() -> ConnectionConfig {
ConnectionConfig {
name: "nas".into(),
host: "10.0.0.1".into(),
endpoint: Endpoint::Sftp(SftpEndpoint {
user: "admin".into(),
pass: None,
key_file: None,
port: 22,
connections: 8,
}),
}
}
fn smb_conn() -> ConnectionConfig {
ConnectionConfig {
name: "office".into(),
host: "192.168.1.100".into(),
endpoint: Endpoint::Smb(SmbEndpoint {
user: "photographer".into(),
pass: Some("secret".into()),
domain: None,
port: 445,
share: "photos".into(),
}),
}
}
fn share(conn_name: &str, remote_path: &str) -> ShareConfig {
ShareConfig {
name: "test".into(),
connection: conn_name.into(),
remote_path: remote_path.into(),
mount_point: "/mnt/test".into(),
read_only: false,
dir_refresh_interval: None,
}
}
// --- rclone_remote_path ---
#[test]
fn test_sftp_remote_path() {
let conn = sftp_conn();
let s = share("nas", "/volume1/photos");
assert_eq!(rclone_remote_path(&conn, &s), "nas:/volume1/photos");
}
#[test]
fn test_sftp_remote_path_root() {
let conn = sftp_conn();
let s = share("nas", "/");
assert_eq!(rclone_remote_path(&conn, &s), "nas:/");
}
#[test]
fn test_smb_remote_path() {
let conn = smb_conn();
let s = share("office", "/subfolder");
assert_eq!(rclone_remote_path(&conn, &s), "office:photos/subfolder");
}
#[test]
fn test_smb_remote_path_root() {
let conn = smb_conn();
let s = share("office", "/");
assert_eq!(rclone_remote_path(&conn, &s), "office:photos");
}
#[test]
fn test_smb_remote_path_nested() {
let conn = smb_conn();
let s = share("office", "/2024/wedding");
assert_eq!(
rclone_remote_path(&conn, &s),
"office:photos/2024/wedding"
);
}
// --- rclone_test_path ---
#[test]
fn test_sftp_test_path() {
assert_eq!(rclone_test_path(&sftp_conn()), "nas:/");
}
#[test]
fn test_smb_test_path() {
assert_eq!(rclone_test_path(&smb_conn()), "office:photos/");
}
// --- rclone_browse_path ---
#[test]
fn test_sftp_browse_path() {
assert_eq!(rclone_browse_path(&sftp_conn(), "/volume1"), "nas:/volume1");
}
#[test]
fn test_sftp_browse_path_root() {
assert_eq!(rclone_browse_path(&sftp_conn(), "/"), "nas:/");
}
#[test]
fn test_smb_browse_path() {
assert_eq!(
rclone_browse_path(&smb_conn(), "/subfolder"),
"office:photos/subfolder"
);
}
#[test]
fn test_smb_browse_path_root() {
assert_eq!(rclone_browse_path(&smb_conn(), "/"), "office:photos");
}
// --- rclone_remote_subpath ---
#[test]
fn test_sftp_subpath() {
let conn = sftp_conn();
let s = share("nas", "/volume1/photos");
assert_eq!(
rclone_remote_subpath(&conn, &s, "2024"),
"nas:/volume1/photos/2024"
);
}
#[test]
fn test_sftp_subpath_empty() {
let conn = sftp_conn();
let s = share("nas", "/volume1/photos");
assert_eq!(
rclone_remote_subpath(&conn, &s, ""),
"nas:/volume1/photos"
);
}
#[test]
fn test_sftp_subpath_root_share_no_double_slash() {
// When remote_path is "/", base is "nas:/" — must not produce "nas://foo"
let conn = sftp_conn();
let s = share("nas", "/");
assert_eq!(rclone_remote_subpath(&conn, &s, "foo"), "nas:/foo");
}
#[test]
fn test_smb_subpath() {
let conn = smb_conn();
let s = share("office", "/subfolder");
assert_eq!(
rclone_remote_subpath(&conn, &s, "2024"),
"office:photos/subfolder/2024"
);
}
#[test]
fn test_smb_subpath_root_share() {
let conn = smb_conn();
let s = share("office", "/");
assert_eq!(
rclone_remote_subpath(&conn, &s, "2024"),
"office:photos/2024"
);
}
// --- vfs_cache_prefix ---
#[test]
fn test_sftp_vfs_cache_prefix() {
let conn = sftp_conn();
let s = share("nas", "/volume1/photos");
assert_eq!(
vfs_cache_prefix(&conn, &s),
std::path::PathBuf::from("nas/volume1/photos")
);
}
#[test]
fn test_smb_vfs_cache_prefix() {
let conn = smb_conn();
let s = share("office", "/subfolder");
assert_eq!(
vfs_cache_prefix(&conn, &s),
std::path::PathBuf::from("office/photos/subfolder")
);
}
#[test]
fn test_smb_vfs_cache_prefix_root() {
let conn = smb_conn();
let s = share("office", "/");
assert_eq!(
vfs_cache_prefix(&conn, &s),
std::path::PathBuf::from("office/photos")
);
}
}


@@ -4,13 +4,14 @@
//! This prevents rclone from mounting a FUSE filesystem that silently fails
//! when clients try to access it.
use std::fmt::Write as FmtWrite;
use std::process::Command;
use std::time::Duration;
use anyhow::{Context, Result};
use crate::config::{Config, ShareConfig};
use crate::rclone::config::RCLONE_CONF_PATH;
use crate::rclone::config::{obscure_password, RCLONE_CONF_PATH};
/// Probe timeout per share.
const PROBE_TIMEOUT: Duration = Duration::from_secs(10);
@@ -20,8 +21,12 @@ const PROBE_TIMEOUT: Duration = Duration::from_secs(10);
/// Runs: `rclone lsf <connection>:<remote_path> --max-depth 1 --config <rclone.conf>`
///
/// Returns `Ok(())` if the directory exists, `Err` with a descriptive message if not.
pub fn probe_remote_path(_config: &Config, share: &ShareConfig) -> Result<()> {
let remote = format!("{}:{}", share.connection, share.remote_path);
pub fn probe_remote_path(config: &Config, share: &ShareConfig) -> Result<()> {
let remote = if let Some(conn) = config.connection_for_share(share) {
super::path::rclone_remote_path(conn, share)
} else {
format!("{}:{}", share.connection, share.remote_path)
};
let mut child = Command::new("rclone")
.args([
@@ -84,6 +89,209 @@ pub fn probe_remote_path(_config: &Config, share: &ShareConfig) -> Result<()> {
}
}
/// Parameters for an ad-hoc connection (used by test and browse).
pub enum ConnParams {
Sftp {
host: String,
user: String,
pass: Option<String>,
key_file: Option<String>,
port: u16,
},
Smb {
host: String,
user: String,
pass: Option<String>,
domain: Option<String>,
port: u16,
share: String,
},
}
/// A temporary file that is deleted when dropped.
struct TempConf {
path: String,
}
impl TempConf {
fn path(&self) -> &str {
&self.path
}
}
impl Drop for TempConf {
fn drop(&mut self) {
let _ = std::fs::remove_file(&self.path);
}
}
/// Write a temporary rclone config with a single remote named `remote_name`.
fn write_temp_rclone_conf(params: &ConnParams, remote_name: &str) -> Result<TempConf> {
let mut conf = String::new();
writeln!(conf, "[{remote_name}]").unwrap();
match params {
ConnParams::Sftp { host, user, pass, key_file, port } => {
writeln!(conf, "type = sftp").unwrap();
writeln!(conf, "host = {host}").unwrap();
writeln!(conf, "user = {user}").unwrap();
writeln!(conf, "port = {port}").unwrap();
if let Some(pass) = pass {
if !pass.is_empty() {
let obscured = obscure_password(pass)?;
writeln!(conf, "pass = {obscured}").unwrap();
}
}
if let Some(key_file) = key_file {
if !key_file.is_empty() {
writeln!(conf, "key_file = {key_file}").unwrap();
}
}
writeln!(conf, "disable_hashcheck = true").unwrap();
}
ConnParams::Smb { host, user, pass, domain, port, .. } => {
writeln!(conf, "type = smb").unwrap();
writeln!(conf, "host = {host}").unwrap();
writeln!(conf, "user = {user}").unwrap();
writeln!(conf, "port = {port}").unwrap();
if let Some(pass) = pass {
if !pass.is_empty() {
let obscured = obscure_password(pass)?;
writeln!(conf, "pass = {obscured}").unwrap();
}
}
if let Some(domain) = domain {
if !domain.is_empty() {
writeln!(conf, "domain = {domain}").unwrap();
}
}
}
}
let uid = uuid_short();
let path = format!("/tmp/wg-test-{uid}.conf");
std::fs::write(&path, conf.as_bytes())
.with_context(|| format!("Failed to write temp rclone config: {path}"))?;
Ok(TempConf { path })
}
/// Run `rclone lsf` with a timeout, returning stdout on success or an error message.
fn run_rclone_lsf(args: &[&str], timeout: Duration) -> Result<String> {
let mut child = Command::new("rclone")
.args(args)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.spawn()
.context("Failed to spawn rclone")?;
let deadline = std::time::Instant::now() + timeout;
loop {
match child.try_wait() {
Ok(Some(status)) => {
if status.success() {
let stdout = if let Some(mut out) = child.stdout.take() {
let mut buf = String::new();
std::io::Read::read_to_string(&mut out, &mut buf).unwrap_or(0);
buf
} else {
String::new()
};
return Ok(stdout);
}
let stderr = if let Some(mut err) = child.stderr.take() {
let mut buf = String::new();
std::io::Read::read_to_string(&mut err, &mut buf).unwrap_or(0);
buf
} else {
String::new()
};
let msg = stderr.trim();
if msg.is_empty() {
anyhow::bail!("rclone exited with code {}", status.code().unwrap_or(-1));
} else {
anyhow::bail!("{}", extract_rclone_error(msg));
}
}
Ok(None) => {
if std::time::Instant::now() > deadline {
let _ = child.kill();
let _ = child.wait();
anyhow::bail!("timed out after {}s", timeout.as_secs());
}
std::thread::sleep(Duration::from_millis(100));
}
Err(e) => anyhow::bail!("failed to poll rclone: {e}"),
}
}
}
/// Test whether a connection is reachable by listing the root directory.
///
/// Returns `Ok(())` if rclone can connect and list, `Err` with an error message if not.
/// - SFTP: lists `remote:/`
/// - SMB: lists `remote:share/`
pub fn test_connection(params: &ConnParams) -> Result<()> {
let remote_name = format!("wg-test-{}", uuid_short());
let tmp = write_temp_rclone_conf(params, &remote_name)?;
let conf_path = tmp.path().to_string();
let remote = match params {
ConnParams::Sftp { .. } => format!("{remote_name}:/"),
ConnParams::Smb { share, .. } => format!("{remote_name}:{share}/"),
};
run_rclone_lsf(
&["lsf", &remote, "--max-depth", "1", "--config", &conf_path],
PROBE_TIMEOUT,
)?;
Ok(())
}
/// List subdirectories at `path` on the remote, returning their names (without trailing `/`).
///
/// For SMB, the path is relative to the share name.
pub fn browse_dirs(params: &ConnParams, path: &str) -> Result<Vec<String>> {
let remote_name = format!("wg-test-{}", uuid_short());
let tmp = write_temp_rclone_conf(params, &remote_name)?;
let conf_path = tmp.path().to_string();
let remote = match params {
ConnParams::Sftp { .. } => format!("{remote_name}:{path}"),
ConnParams::Smb { share, .. } => super::path::smb_remote(&remote_name, share, path),
};
let stdout = run_rclone_lsf(
&[
"lsf",
&remote,
"--max-depth",
"1",
"--dirs-only",
"--config",
&conf_path,
],
PROBE_TIMEOUT,
)?;
let dirs = stdout
.lines()
.map(|l| l.trim_end_matches('/').to_string())
.filter(|l| !l.is_empty())
.collect();
Ok(dirs)
}
/// Generate a short, effectively unique hex string for temp-file naming
/// (derived from the PID and clock sub-second nanos; no external rand dependency).
fn uuid_short() -> String {
use std::time::{SystemTime, UNIX_EPOCH};
let nanos = SystemTime::now()
.duration_since(UNIX_EPOCH)
.map(|d| d.subsec_nanos())
.unwrap_or(0);
let pid = std::process::id();
format!("{:08x}{:08x}", pid, nanos)
}
/// Extract the most useful part of rclone's error output.
///
/// rclone stderr often contains timestamps and log levels; we strip those


@@ -60,8 +60,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@@ -87,8 +88,9 @@ mount_point = "/mnt/photos"
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"


@@ -177,8 +177,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@@ -204,8 +205,9 @@ mount_point = "/mnt/photos"
r#"
[[connections]]
name = "nas"
nas_host = "10.0.0.1"
nas_user = "admin"
host = "10.0.0.1"
protocol = "sftp"
user = "admin"
[cache]
dir = "/tmp/cache"
@@ -244,8 +246,9 @@ read_only = true
r#"
[[connections]]
name = "nas"
-nas_host = "10.0.0.1"
-nas_user = "admin"
+host = "10.0.0.1"
+protocol = "sftp"
+user = "admin"
[cache]
dir = "/tmp/cache"

View File

@ -71,8 +71,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
-nas_host = "10.0.0.1"
-nas_user = "admin"
+host = "10.0.0.1"
+protocol = "sftp"
+user = "admin"
[cache]
dir = "/tmp/cache"

View File

@ -35,8 +35,9 @@ mod tests {
r#"
[[connections]]
name = "nas"
-nas_host = "10.0.0.1"
-nas_user = "admin"
+host = "10.0.0.1"
+protocol = "sftp"
+user = "admin"
[cache]
dir = "/tmp/cache"
@ -85,8 +86,9 @@ mount_point = "/mnt/photos"
r#"
[[connections]]
name = "nas"
-nas_host = "10.0.0.1"
-nas_user = "admin"
+host = "10.0.0.1"
+protocol = "sftp"
+user = "admin"
[cache]
dir = "/tmp/cache"

View File

@ -4,7 +4,7 @@
//! process tree with coordinated startup and shutdown. Spawns a built-in web
//! server for status monitoring and config hot-reload.
-use std::collections::HashMap;
+use std::collections::{HashMap, VecDeque};
use std::os::unix::process::CommandExt;
use std::path::PathBuf;
use std::process::{Child, Command};
@ -15,6 +15,9 @@ use std::thread;
use std::time::{Duration, Instant, SystemTime};
use anyhow::{Context, Result};
use chrono::Utc;
use cron::Schedule;
use std::str::FromStr;
use tracing::{error, info, warn};
use crate::config::Config;
@ -47,6 +50,8 @@ const STATS_SNAPSHOT_INTERVAL: Duration = Duration::from_secs(60);
const CACHE_WARN_THRESHOLD: f64 = 0.80;
/// Cache usage CRIT threshold.
const CACHE_CRITICAL_THRESHOLD: f64 = 0.95;
/// Number of speed samples in the adaptive bandwidth sliding window.
const ADAPTIVE_WINDOW_SIZE: usize = 6;
/// Per-share state from the previous poll cycle, used for change detection.
struct SharePrevState {
@ -695,6 +700,15 @@ fn spawn_webdav(config: &Config) -> Result<Child> {
.context("Failed to spawn rclone serve webdav")
}
/// Send a notification via webhook (fire-and-forget, logs on error).
fn send_webhook_notification(url: &str, message: &str) {
if url.is_empty() { return; }
let body = serde_json::json!({ "text": message, "message": message });
if let Err(e) = ureq::post(url).send_json(&body) {
warn!("Notification webhook failed: {}", e);
}
}
/// Main supervision loop with command channel.
///
/// Uses `recv_timeout` on the command channel so it can both respond to
@ -715,6 +729,9 @@ fn supervise(
let mut webdav_tracker = RestartTracker::new();
let mut prev_states: HashMap<String, SharePrevState> = HashMap::new();
let mut last_stats_snapshot = Instant::now();
let mut last_scheduled_warmup: Option<Instant> = None;
let mut adaptive_window: VecDeque<u64> = VecDeque::with_capacity(ADAPTIVE_WINDOW_SIZE);
let mut adaptive_current_limit: u64 = 0;
loop {
// Check for commands (non-blocking with timeout = POLL_INTERVAL)
@ -727,6 +744,26 @@ fn supervise(
info!(bw_limit_up = %up, bw_limit_down = %down, "bandwidth limit applied");
apply_bwlimit(mounts, &up, &down);
}
Ok(SupervisorCmd::Reconnect(share_name)) => {
info!("Reconnect requested for share '{}'", share_name);
if shared_config.read().unwrap().shares.iter().any(|s| s.name == share_name) {
// Kill existing mount if running
if let Some(pos) = mounts.iter().position(|m| m.name == share_name) {
let mut m = mounts.remove(pos);
let _ = m.child.kill();
let _ = m.child.wait();
}
// Reset health so it gets re-probed and re-mounted on next poll
let mut status = shared_status.write().unwrap();
if let Some(s) = status.shares.iter_mut().find(|s| s.name == share_name) {
s.mounted = false;
s.health = crate::daemon::ShareHealth::Pending;
}
info!("Share '{}' reset for reconnect", share_name);
} else {
warn!("Reconnect: share '{}' not found", share_name);
}
}
Ok(SupervisorCmd::Reload(new_config)) => {
info!("Config reload requested...");
handle_reload(
@ -839,6 +876,163 @@ fn supervise(
// Update shared status with fresh RC stats
update_status(shared_status, mounts, protocols, &config);
// Compute nas_offline and all_synced, then check notifications
{
let mut status = shared_status.write().unwrap();
// nas_offline: true when ALL shares are either not mounted or have failed health
let all_failed = status.shares.iter().all(|s| !s.mounted || matches!(s.health, ShareHealth::Failed(_)));
let any_mounted = status.shares.iter().any(|s| s.mounted);
let nas_offline = !any_mounted || all_failed;
// all_synced: true when dirty_count=0 and transfers=0 across all shares
let total_dirty: u64 = status.shares.iter().map(|s| s.dirty_count).sum();
let total_transfers: u64 = status.shares.iter().map(|s| s.transfers).sum();
let all_synced = total_dirty == 0 && total_transfers == 0;
status.nas_offline = nas_offline;
status.all_synced = all_synced;
// Check notifications
let notif = config.notifications.clone();
if !notif.webhook_url.is_empty() {
let url = notif.webhook_url.clone();
// NAS offline notification
if status.nas_offline {
if status.nas_offline_since.is_none() {
status.nas_offline_since = Some(Instant::now());
}
let elapsed_mins = status.nas_offline_since
.map(|t| t.elapsed().as_secs() / 60)
.unwrap_or(0);
if elapsed_mins >= notif.nas_offline_minutes && !status.nas_offline_notified {
send_webhook_notification(&url, &format!(
"\u{26a0}\u{fe0f} Warpgate: NAS has been offline for {} minutes. Writes are queued locally.", elapsed_mins
));
status.nas_offline_notified = true;
}
} else {
status.nas_offline_since = None;
status.nas_offline_notified = false;
}
// Cache usage % notification
if notif.cache_threshold_pct > 0 {
let total_cache: u64 = status.shares.iter().map(|s| s.cache_bytes).sum();
if let Some(max_bytes) = parse_size_bytes(&config.cache.max_size) {
let pct = (total_cache as f64 / max_bytes as f64 * 100.0) as u8;
if pct >= notif.cache_threshold_pct && !status.cache_threshold_notified {
send_webhook_notification(&url, &format!(
"\u{26a0}\u{fe0f} Warpgate: cache usage {}% — consider cleaning", pct
));
status.cache_threshold_notified = true;
} else if pct < notif.cache_threshold_pct.saturating_sub(5) {
// Hysteresis: reset when usage drops 5% below threshold
status.cache_threshold_notified = false;
}
}
}
// Write-back depth notification
if total_dirty >= notif.writeback_depth {
if status.cache_notified_level < 3 {
send_webhook_notification(&url, &format!(
"\u{26a0}\u{fe0f} Warpgate: {} files pending write-back to NAS.", total_dirty
));
status.cache_notified_level = 3;
}
} else if status.cache_notified_level == 3 {
status.cache_notified_level = 0;
}
}
}
// Scheduled warmup check
{
let cfg = shared_config.read().unwrap();
let schedule = cfg.warmup.warmup_schedule.clone();
if !schedule.is_empty() && !cfg.warmup.rules.is_empty() {
let should_run = {
let normalized = normalize_cron_schedule(&schedule);
match Schedule::from_str(&normalized) {
Ok(sched) => match last_scheduled_warmup {
None => {
// First check: fire if the next scheduled time is within 60 seconds
sched.upcoming(Utc).next()
.map(|t| {
let diff = t.timestamp() - Utc::now().timestamp();
diff >= 0 && diff <= 60
})
.unwrap_or(false)
}
Some(last) => {
// Has run before: check if there's a scheduled time between
// last run and now
let elapsed_secs = last.elapsed().as_secs() as i64;
let last_dt = Utc::now() - chrono::Duration::seconds(elapsed_secs);
sched.after(&last_dt).next()
.map(|t| t <= Utc::now())
.unwrap_or(false)
}
},
Err(e) => {
warn!("Invalid warmup_schedule '{}': {}", schedule, e);
false
}
}
};
if should_run {
info!("Scheduled warmup triggered (schedule: {})", schedule);
last_scheduled_warmup = Some(Instant::now());
let cfg_clone = cfg.clone();
drop(cfg);
spawn_warmup(&cfg_clone, shared_status, &shutdown);
}
}
}
// Adaptive bandwidth throttling
{
let cfg = shared_config.read().unwrap();
if cfg.bandwidth.adaptive {
let max_limit = parse_size_bytes(&cfg.bandwidth.limit_up).unwrap_or(0);
if max_limit > 0 {
let total_speed: u64 = {
let status = shared_status.read().unwrap();
status.shares.iter().map(|s| s.speed as u64).sum()
};
adaptive_window.push_back(total_speed);
if adaptive_window.len() > ADAPTIVE_WINDOW_SIZE {
adaptive_window.pop_front();
}
if adaptive_window.len() >= ADAPTIVE_WINDOW_SIZE {
let window_slice: Vec<u64> = adaptive_window.iter().copied().collect();
let effective_current = if adaptive_current_limit == 0 { max_limit } else { adaptive_current_limit };
let new_limit = compute_adaptive_limit(
&window_slice,
adaptive_current_limit,
max_limit,
);
if new_limit != effective_current {
let limit_str = format!("{}k", new_limit / 1024);
info!(
adaptive_limit = %limit_str,
"Adaptive bwlimit adjusted"
);
adaptive_current_limit = new_limit;
apply_bwlimit(mounts, &limit_str, &cfg.bandwidth.limit_down);
}
}
}
} else if adaptive_current_limit != 0 {
// Adaptive was turned off: restore the configured limit
adaptive_current_limit = 0;
apply_bwlimit(mounts, &cfg.bandwidth.limit_up, &cfg.bandwidth.limit_down);
}
}
// Log cache state changes and periodic snapshots
log_cache_events(shared_status, &config, &mut prev_states, &mut last_stats_snapshot);
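The cache-usage notification above latches with simple hysteresis: fire once when usage crosses the threshold, and re-arm only after it drops five points below it. In isolation the rule looks like this (sketch with a hypothetical helper name, not code from the supervisor):

```rust
// Hysteresis sketch: returns true when a notification should fire.
// `notified` is the latch that suppresses repeat alerts.
fn hysteresis_step(pct: u8, threshold: u8, notified: &mut bool) -> bool {
    if pct >= threshold && !*notified {
        *notified = true;
        true // crossed the threshold while armed: fire once
    } else {
        if pct < threshold.saturating_sub(5) {
            *notified = false; // usage fell 5 points below threshold: re-arm
        }
        false
    }
}

fn main() {
    let mut armed = false;
    assert!(hysteresis_step(81, 80, &mut armed)); // crosses: fire
    assert!(!hysteresis_step(85, 80, &mut armed)); // still high: silent
    assert!(!hysteresis_step(76, 80, &mut armed)); // 76 >= 75: still latched
    assert!(!hysteresis_step(74, 80, &mut armed)); // below 75: re-armed, no fire
    assert!(hysteresis_step(80, 80, &mut armed)); // crosses again: fire
}
```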
@ -1104,6 +1298,18 @@ fn handle_reload(
*cfg = new_config.clone();
}
// Collect affected share names from the diff for health recalculation.
// For Tier D (global), all shares are affected; for Tier C (per-share),
// only modified/added shares need fresh health.
let affected_shares: std::collections::HashSet<&str> = if diff.global_changed {
new_config.shares.iter().map(|s| s.name.as_str()).collect()
} else {
diff.shares_modified.iter()
.chain(diff.shares_added.iter())
.map(|s| s.as_str())
.collect()
};
// Update shared status with new share list
{
let mut status = shared_status.write().unwrap();
@ -1112,11 +1318,15 @@ fn handle_reload(
.iter()
.enumerate()
.map(|(i, s)| {
// Preserve existing stats if share still exists
let existing = status.shares.iter().find(|ss| ss.name == s.name);
let is_affected = affected_shares.contains(s.name.as_str());
crate::daemon::ShareStatus {
name: s.name.clone(),
-mounted: existing.map(|e| e.mounted).unwrap_or(false),
+mounted: if is_affected {
+mounts.iter().any(|mc| mc.name == s.name)
+} else {
+existing.map(|e| e.mounted).unwrap_or(false)
+},
rc_port: new_config.rc_port(i),
cache_bytes: existing.map(|e| e.cache_bytes).unwrap_or(0),
dirty_count: existing.map(|e| e.dirty_count).unwrap_or(0),
@ -1124,16 +1334,19 @@ fn handle_reload(
speed: existing.map(|e| e.speed).unwrap_or(0.0),
transfers: existing.map(|e| e.transfers).unwrap_or(0),
errors: existing.map(|e| e.errors).unwrap_or(0),
-health: existing
-.map(|e| e.health.clone())
-.unwrap_or_else(|| {
-// New share: if mount succeeded, it's healthy
-if mounts.iter().any(|mc| mc.name == s.name) {
-ShareHealth::Healthy
-} else {
-ShareHealth::Pending
-}
-}),
+health: if is_affected {
+// Recalculate health based on mount success
+if mounts.iter().any(|mc| mc.name == s.name) {
+ShareHealth::Healthy
+} else {
+ShareHealth::Failed("Mount failed after reload".into())
+}
+} else {
+// Unaffected share: preserve existing health
+existing
+.map(|e| e.health.clone())
+.unwrap_or(ShareHealth::Pending)
+},
}
})
.collect();
@ -1503,6 +1716,52 @@ fn log_cache_events(
}
}
/// Convert a standard 5-field cron expression to the 7-field format expected
/// by the `cron` crate ("sec min hour dom month dow year").
///
/// - 5 fields ("min hour dom month dow") → prepend "0 " (sec=0), append " *" (year=any)
/// - 6 fields (already has sec) → append " *" (year=any)
/// - 7 fields → unchanged
fn normalize_cron_schedule(expr: &str) -> String {
let fields: Vec<&str> = expr.split_whitespace().collect();
match fields.len() {
5 => format!("0 {} *", expr),
6 => format!("{} *", expr),
_ => expr.to_string(),
}
}
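For reference, the schedule lands in the config file as a standard 5-field cron expression (the `[warmup]` section and key name are inferred from `cfg.warmup.warmup_schedule`; values illustrative):

```toml
[warmup]
# Warm the cache every day at 02:00. normalize_cron_schedule() expands this
# 5-field form to the 7-field "0 0 2 * * * *" expected by the cron crate.
warmup_schedule = "0 2 * * *"
```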
/// Compute the new adaptive bandwidth limit from a window of speed samples.
///
/// - `window`: recent aggregate upload speed samples in bytes/sec (must be non-empty)
/// - `current_limit`: last applied limit (0 = "use `max_limit` as baseline")
/// - `max_limit`: configured upper bound in bytes/sec (0 = unlimited → passthrough)
///
/// Returns the new limit to apply (bytes/sec).
fn compute_adaptive_limit(window: &[u64], current_limit: u64, max_limit: u64) -> u64 {
if max_limit == 0 || window.is_empty() {
return current_limit;
}
let current = if current_limit == 0 { max_limit } else { current_limit };
let n = window.len() as f64;
let mean = window.iter().sum::<u64>() as f64 / n;
let variance = window.iter()
.map(|&x| { let d = x as f64 - mean; d * d })
.sum::<f64>() / n;
let std_dev = variance.sqrt();
if mean > 0.0 && std_dev / mean > 0.3 {
// Congested (high coefficient of variation): reduce 25%, floor at 1 MiB/s
((current as f64 * 0.75) as u64).max(1024 * 1024)
} else if mean >= current as f64 * 0.9 {
// Stable and near limit: maintain
current
} else {
// Stable but under-utilizing: increase 10%, cap at max
((current as f64 * 1.1) as u64).min(max_limit)
}
}
/// Parse a human-readable size string (e.g. "200G", "1.5T", "512M") into bytes.
fn parse_size_bytes(s: &str) -> Option<u64> {
let s = s.trim();
@ -1607,4 +1866,129 @@ mod tests {
assert_eq!(parse_size_bytes("200GB"), Some(200 * 1024 * 1024 * 1024));
assert_eq!(parse_size_bytes("bogus"), None);
}
// -----------------------------------------------------------------------
// normalize_cron_schedule
// -----------------------------------------------------------------------
#[test]
fn test_normalize_cron_5field() {
// Standard cron "min hour dom month dow" → prepend "0 " (sec=0), append " *" (year=any)
assert_eq!(normalize_cron_schedule("0 2 * * *"), "0 0 2 * * * *");
}
#[test]
fn test_normalize_cron_5field_wildcard_min() {
// Every 5 minutes
assert_eq!(normalize_cron_schedule("*/5 * * * *"), "0 */5 * * * * *");
}
#[test]
fn test_normalize_cron_6field() {
// 6-field (already has seconds) → append " *" for year
assert_eq!(normalize_cron_schedule("0 0 2 * * *"), "0 0 2 * * * *");
}
#[test]
fn test_normalize_cron_7field() {
// Already 7 fields → unchanged
assert_eq!(normalize_cron_schedule("0 0 2 * * * *"), "0 0 2 * * * *");
}
#[test]
fn test_normalize_cron_7field_unchanged_complex() {
let expr = "0 30 9,12 1,15 May-Aug Mon,Wed *";
assert_eq!(normalize_cron_schedule(expr), expr);
}
// -----------------------------------------------------------------------
// compute_adaptive_limit
// -----------------------------------------------------------------------
const MIB: u64 = 1024 * 1024;
#[test]
fn test_adaptive_window_size_constant() {
assert_eq!(ADAPTIVE_WINDOW_SIZE, 6);
}
#[test]
fn test_compute_adaptive_limit_congested_reduces_25pct() {
// Alternating 1M/5M → mean=3M, std_dev=2M, cv=0.67 > 0.3 → congested
let window = vec![MIB, 5 * MIB, MIB, 5 * MIB, MIB, 5 * MIB];
let max = 10 * MIB;
let current = 10 * MIB;
let new = compute_adaptive_limit(&window, current, max);
assert_eq!(new, ((10 * MIB) as f64 * 0.75) as u64);
assert!(new < current);
}
#[test]
fn test_compute_adaptive_limit_congested_floor_at_1mib() {
// Very noisy but current is near floor — must not go below 1 MiB/s
let window = vec![100, MIB, 100, MIB, 100, MIB];
let max = 10 * MIB;
let current = (MIB as f64 * 1.1) as u64; // slightly above floor
let new = compute_adaptive_limit(&window, current, max);
assert!(new >= MIB, "floor violated: {new} < {MIB}");
}
#[test]
fn test_compute_adaptive_limit_stable_near_max_maintains() {
// All samples ≥ 90% of limit → maintain
let limit = 10 * MIB;
let window = vec![9_500_000, 9_600_000, 9_700_000, 9_800_000, 9_900_000, 10_000_000];
let new = compute_adaptive_limit(&window, limit, limit);
assert_eq!(new, limit);
}
#[test]
fn test_compute_adaptive_limit_under_utilizing_increases_10pct() {
// mean=3M, current=5M → 3M < 5M*0.9=4.5M → under-utilizing → +10%
let max = 10 * MIB;
let current = 5 * MIB;
let window = vec![
2_800_000, 3_000_000, 3_200_000,
2_900_000, 3_100_000, 3_000_000,
];
let new = compute_adaptive_limit(&window, current, max);
assert_eq!(new, (current as f64 * 1.1) as u64);
assert!(new > current);
}
#[test]
fn test_compute_adaptive_limit_increase_capped_at_max() {
// current near max: a 10% increase would exceed max and must be capped
let max = 10 * MIB;
let current = 9_800_000u64; // +10% = 10_780_000 > max (10_485_760)
let window = vec![3_000_000; 6]; // under-utilizing
let new = compute_adaptive_limit(&window, current, max);
assert!(new <= max, "cap violated: {new} > {max}");
assert_eq!(new, max);
}
#[test]
fn test_compute_adaptive_limit_zero_current_uses_max_as_baseline() {
// current=0 means "baseline = max_limit"
let max = 10 * MIB;
// Under-utilizing from max baseline → +10%, capped at max
let window = vec![3_000_000; 6];
let new = compute_adaptive_limit(&window, 0, max);
assert!(new <= max);
// (10M * 1.1).min(10M) = 10M
assert_eq!(new, max);
}
#[test]
fn test_compute_adaptive_limit_zero_max_passthrough() {
// max=0 means unlimited — function returns current unchanged
let window = vec![MIB; 6];
let new = compute_adaptive_limit(&window, 5 * MIB, 0);
assert_eq!(new, 5 * MIB);
}
#[test]
fn test_compute_adaptive_limit_empty_window_passthrough() {
let new = compute_adaptive_limit(&[], 5 * MIB, 10 * MIB);
assert_eq!(new, 5 * MIB);
}
}

View File

@ -9,6 +9,7 @@ use axum::response::Json;
use axum::routing::{get, post};
use axum::Router;
use serde::Serialize;
use tokio::time::{timeout, Duration};
use crate::config::Config;
use crate::daemon::SupervisorCmd;
@ -22,6 +23,10 @@ pub fn routes() -> Router<SharedState> {
.route("/api/config", post(post_config))
.route("/api/bwlimit", post(post_bwlimit))
.route("/api/logs", get(get_logs))
.route("/api/reconnect/{share}", post(reconnect_share))
.route("/api/preset/{profile}", post(post_preset))
.route("/api/test-connection", post(post_test_connection))
.route("/api/browse", post(post_browse))
}
/// GET /api/status — overall daemon status.
@ -33,6 +38,8 @@ struct StatusResponse {
webdav_running: bool,
nfs_exported: bool,
warmup: Vec<WarmupRuleStatusResponse>,
nas_offline: bool,
all_synced: bool,
}
#[derive(Serialize)]
@ -153,6 +160,8 @@ async fn get_status(State(state): State<SharedState>) -> Json<StatusResponse> {
}
})
.collect(),
nas_offline: status.nas_offline,
all_synced: status.all_synced,
})
}
@ -311,6 +320,40 @@ async fn post_bwlimit(
}
}
/// POST /api/preset/{profile} — apply a configuration preset.
async fn post_preset(
State(state): State<SharedState>,
Path(profile): Path<String>,
) -> axum::response::Response {
use axum::response::IntoResponse;
let preset = match profile.parse::<crate::cli::preset::Preset>() {
Ok(p) => p,
Err(e) => return (StatusCode::BAD_REQUEST, e.to_string()).into_response(),
};
let mut config = {
let cfg = state.config.read().unwrap();
cfg.clone()
};
preset.apply(&mut config);
let toml_content = config.to_commented_toml();
if let Err(e) = std::fs::write(&state.config_path, &toml_content) {
return format!("<span class='error'>保存失败: {e}</span>").into_response();
}
if let Err(e) = state
.cmd_tx
.send(SupervisorCmd::Reload(config))
{
return format!("<span class='error'>重载失败: {e}</span>").into_response();
}
format!("<span class='ok'>✓ 已应用「{profile}」预设,配置重新加载中...</span>").into_response()
}
/// GET /api/logs?lines=200&from_line=0 — recent log file entries.
#[derive(serde::Deserialize)]
struct LogsQuery {
@ -380,3 +423,133 @@ async fn get_logs(
entries,
})
}
/// POST /api/test-connection — verify credentials can connect.
///
/// Accepts a connection object. The `name` field is optional (ignored by the probe).
#[derive(serde::Deserialize)]
struct TestConnRequest {
host: String,
#[serde(flatten)]
endpoint: crate::config::Endpoint,
}
#[derive(Serialize)]
struct TestConnResponse {
ok: bool,
message: String,
}
/// Convert a test/browse request into ConnParams for probe functions.
fn req_to_params(host: &str, endpoint: &crate::config::Endpoint) -> crate::rclone::probe::ConnParams {
use crate::config::Endpoint;
match endpoint {
Endpoint::Sftp(sftp) => crate::rclone::probe::ConnParams::Sftp {
host: host.to_string(),
user: sftp.user.clone(),
pass: sftp.pass.clone(),
key_file: sftp.key_file.clone(),
port: sftp.port,
},
Endpoint::Smb(smb) => crate::rclone::probe::ConnParams::Smb {
host: host.to_string(),
user: smb.user.clone(),
pass: smb.pass.clone(),
domain: smb.domain.clone(),
port: smb.port,
share: smb.share.clone(),
},
}
}
const TEST_CONNECTION_TIMEOUT: Duration = Duration::from_secs(12);
async fn post_test_connection(
Json(body): Json<TestConnRequest>,
) -> Json<TestConnResponse> {
let params = req_to_params(&body.host, &body.endpoint);
match timeout(
TEST_CONNECTION_TIMEOUT,
tokio::task::spawn_blocking(move || crate::rclone::probe::test_connection(&params)),
)
.await
{
// Layers: outer = timeout, middle = spawn_blocking join, inner = probe result
Ok(Ok(Ok(()))) => Json(TestConnResponse {
ok: true,
message: "Connected".to_string(),
}),
Ok(Ok(Err(e))) => Json(TestConnResponse {
ok: false,
message: e.to_string(),
}),
Ok(Err(e)) => Json(TestConnResponse {
ok: false,
message: format!("Internal error: {e}"),
}),
Err(_) => Json(TestConnResponse {
ok: false,
message: format!(
"Connection test timed out after {}s",
TEST_CONNECTION_TIMEOUT.as_secs()
),
}),
}
}
/// POST /api/browse — list subdirectories at a remote path.
///
/// Accepts a connection object (without `name`). The `path` field is optional.
#[derive(serde::Deserialize)]
struct BrowseRequest {
host: String,
#[serde(flatten)]
endpoint: crate::config::Endpoint,
#[serde(default = "default_browse_path")]
path: String,
}
fn default_browse_path() -> String {
"/".to_string()
}
#[derive(Serialize)]
struct BrowseResponse {
ok: bool,
#[serde(skip_serializing_if = "Option::is_none")]
dirs: Option<Vec<String>>,
#[serde(skip_serializing_if = "Option::is_none")]
error: Option<String>,
}
async fn post_browse(
Json(body): Json<BrowseRequest>,
) -> Json<BrowseResponse> {
let params = req_to_params(&body.host, &body.endpoint);
let path = body.path;
match tokio::task::spawn_blocking(move || crate::rclone::probe::browse_dirs(&params, &path)).await {
Ok(Ok(dirs)) => Json(BrowseResponse { ok: true, dirs: Some(dirs), error: None }),
Ok(Err(e)) => Json(BrowseResponse { ok: false, dirs: None, error: Some(e.to_string()) }),
Err(e) => Json(BrowseResponse { ok: false, dirs: None, error: Some(format!("Internal error: {e}")) }),
}
}
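Assuming `Endpoint` is internally tagged by `protocol` (as the config tests with `protocol = "sftp"` suggest), a browse request body would look roughly like this (host and credentials are illustrative):

```json
{
  "host": "10.0.0.1",
  "protocol": "sftp",
  "user": "admin",
  "path": "/"
}
```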
/// POST /api/reconnect/{share} — trigger reconnect for a single share.
async fn reconnect_share(
State(state): State<SharedState>,
Path(share_name): Path<String>,
) -> Json<serde_json::Value> {
// Validate share exists
{
let cfg = state.config.read().unwrap();
if cfg.find_share(&share_name).is_none() {
return Json(serde_json::json!({ "ok": false, "message": format!("Share '{}' not found", share_name) }));
}
}
match state.cmd_tx.send(crate::daemon::SupervisorCmd::Reconnect(share_name.clone())) {
Ok(()) => Json(serde_json::json!({ "ok": true, "message": format!("Reconnecting share '{}'", share_name) })),
Err(e) => Json(serde_json::json!({ "ok": false, "message": format!("Failed to send reconnect: {}", e) })),
}
}

View File

@ -11,8 +11,11 @@ use std::sync::mpsc;
use std::sync::{Arc, RwLock};
use std::thread;
-use axum::http::header;
+use axum::extract::Request;
+use axum::http::{header, StatusCode};
+use axum::middleware::{self, Next};
use axum::response::IntoResponse;
+use axum::response::Response;
use axum::routing::get;
use axum::Router;
@ -29,6 +32,68 @@ async fn style_css() -> impl IntoResponse {
([(header::CONTENT_TYPE, "text/css")], STYLE_CSS)
}
/// HTTP Basic Auth middleware. Only active when `web.password` is set.
async fn basic_auth(
axum::extract::State(state): axum::extract::State<SharedState>,
request: Request,
next: Next,
) -> Response {
let password = {
let cfg = state.config.read().unwrap();
cfg.web.password.clone()
};
if password.is_empty() {
return next.run(request).await;
}
// Check Authorization header
let auth_header = request
.headers()
.get(header::AUTHORIZATION)
.and_then(|h| h.to_str().ok())
.unwrap_or("");
if auth_header.starts_with("Basic ") {
let encoded = &auth_header[6..];
if let Ok(decoded) = base64_decode(encoded) {
// Credentials decode to "<user>:<password>"; the username is ignored and only the password is checked.
let parts: Vec<&str> = decoded.splitn(2, ':').collect();
let provided_password = parts.get(1).copied().unwrap_or("");
if provided_password == password {
return next.run(request).await;
}
}
}
// Return 401 with WWW-Authenticate header
(
StatusCode::UNAUTHORIZED,
[(header::WWW_AUTHENTICATE, "Basic realm=\"Warpgate\"")],
"Unauthorized",
).into_response()
}
/// Minimal standard-alphabet Base64 decoder (skips invalid characters), used
/// to avoid pulling in an external base64 dependency.
fn base64_decode(s: &str) -> Result<String, ()> {
let alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
let mut result = Vec::new();
let s = s.trim_end_matches('=');
let mut buf = 0u32;
let mut bits = 0u32;
for c in s.chars() {
if let Some(pos) = alphabet.find(c) {
buf = (buf << 6) | pos as u32;
bits += 6;
if bits >= 8 {
bits -= 8;
result.push((buf >> bits) as u8);
buf &= (1 << bits) - 1;
}
}
}
String::from_utf8(result).map_err(|_| ())
}
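A quick sanity check of the decode loop (reproduced here so the example stands alone; `decode_b64` is just a local name, not the function above):

```rust
// Same bit-accumulator algorithm as base64_decode: 6 bits per input
// character, emit a byte whenever 8 or more bits are buffered.
fn decode_b64(s: &str) -> Option<String> {
    let alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let (mut buf, mut bits) = (0u32, 0u32);
    let mut out = Vec::new();
    for c in s.trim_end_matches('=').chars() {
        if let Some(pos) = alphabet.find(c) {
            buf = (buf << 6) | pos as u32;
            bits += 6;
            if bits >= 8 {
                bits -= 8;
                out.push((buf >> bits) as u8);
                buf &= (1 << bits) - 1;
            }
        }
    }
    String::from_utf8(out).ok()
}

fn main() {
    // "dXNlcjpwYXNz" is the Base64 encoding of "user:pass".
    assert_eq!(decode_b64("dXNlcjpwYXNz").as_deref(), Some("user:pass"));
}
```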
/// Build the axum router with all routes.
pub fn build_router(state: SharedState) -> Router {
Router::new()
@ -36,6 +101,7 @@ pub fn build_router(state: SharedState) -> Router {
.merge(pages::routes())
.merge(sse::routes())
.merge(api::routes())
.layer(middleware::from_fn_with_state(state.clone(), basic_auth))
.with_state(state)
}

View File

@ -241,6 +241,7 @@ struct LayoutTemplate {
tab_content: String,
uptime: String,
config_path: String,
nas_offline: bool,
}
#[derive(Template)]
@ -257,6 +258,7 @@ struct DashboardTabTemplate {
smbd_running: bool,
webdav_running: bool,
nfs_exported: bool,
all_synced: bool,
}
#[derive(Template)]
@ -306,6 +308,7 @@ struct StatusPartialTemplate {
smbd_running: bool,
webdav_running: bool,
nfs_exported: bool,
all_synced: bool,
}
// ─── Full-page handlers (layout shell + tab content) ──────────────────────
@ -354,6 +357,7 @@ fn render_layout(
tab_content,
uptime: status.uptime_string(),
config_path: state.config_path.display().to_string(),
nas_offline: status.nas_offline,
};
match tmpl.render() {
@ -407,6 +411,7 @@ fn render_dashboard_tab(status: &DaemonStatus, config: &Config) -> String {
let healthy_count = shares.iter().filter(|s| s.health == "OK").count();
let failed_count = shares.iter().filter(|s| s.health == "FAILED").count();
let (total_cache, total_speed, active_transfers) = aggregate_stats(&status.shares);
let all_synced = status.all_synced;
let tmpl = DashboardTabTemplate {
total_shares: shares.len(),
@ -419,6 +424,7 @@ fn render_dashboard_tab(status: &DaemonStatus, config: &Config) -> String {
smbd_running: status.smbd_running,
webdav_running: status.webdav_running,
nfs_exported: status.nfs_exported,
all_synced,
};
tmpl.render().unwrap_or_default()
@ -616,6 +622,7 @@ async fn status_partial(State(state): State<SharedState>) -> Response {
smbd_running: status.smbd_running,
webdav_running: status.webdav_running,
nfs_exported: status.nfs_exported,
all_synced: status.all_synced,
};
match tmpl.render() {

View File

@ -119,6 +119,10 @@ fn render_sse_payload(
webdav_running: status.webdav_running,
};
let sync_status = SyncStatusPartial {
all_synced: status.all_synced,
};
let mut html = String::new();
// Primary target: dashboard stats
if let Ok(s) = stats.render() {
@ -132,6 +136,10 @@ fn render_sse_payload(
if let Ok(s) = badges.render() {
html.push_str(&s);
}
// OOB: sync status indicator
if let Ok(s) = sync_status.render() {
html.push_str(&s);
}
html
}
@ -205,3 +213,9 @@ struct ProtocolBadgesPartial {
nfs_exported: bool,
webdav_running: bool,
}
#[derive(Template)]
#[template(path = "web/partials/sync_status.html")]
struct SyncStatusPartial {
all_synced: bool,
}

View File

@ -456,6 +456,125 @@ textarea:focus {
color: var(--accent);
}
/* ─── Connection test button ──────────────────────────────── */
.item-header-actions {
display: flex;
align-items: center;
gap: 8px;
}
.test-btn {
background: none;
border: 1px solid rgba(108,138,255,0.4);
color: var(--accent);
padding: 4px 10px;
border-radius: 4px;
cursor: pointer;
font-size: 0.8em;
}
.test-btn:hover:not(:disabled) { background: rgba(108,138,255,0.1); }
.test-btn:disabled { opacity: 0.5; cursor: default; }
.test-ok {
font-size: 0.8em;
color: var(--green);
white-space: nowrap;
}
.test-fail {
font-size: 0.8em;
color: var(--red);
max-width: 200px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
/* ─── Remote Path browse combo ────────────────────────────── */
.browse-combo {
display: flex;
gap: 6px;
align-items: center;
width: 100%;
}
.browse-combo input {
flex: 1;
}
.browse-btn {
background: none;
border: 1px solid var(--border);
color: var(--text-muted);
padding: 6px 12px;
border-radius: 4px;
cursor: pointer;
font-size: 0.85em;
white-space: nowrap;
flex-shrink: 0;
}
.browse-btn:hover:not(:disabled) {
border-color: var(--accent);
color: var(--accent);
}
.browse-btn:disabled { opacity: 0.5; cursor: default; }
.dir-dropdown {
margin-top: 4px;
background: var(--surface);
border: 1px solid var(--border);
border-radius: 6px;
overflow: hidden;
max-height: 200px;
overflow-y: auto;
}
.dir-item {
display: flex;
align-items: center;
justify-content: space-between;
padding: 6px 10px;
border-bottom: 1px solid var(--border);
gap: 8px;
}
.dir-item:last-child { border-bottom: none; }
.dir-item:hover { background: rgba(108,138,255,0.06); }
.dir-name {
font-family: var(--mono);
font-size: 0.85em;
cursor: pointer;
color: var(--text);
flex: 1;
}
.dir-name:hover { color: var(--accent); }
.dir-enter {
background: none;
border: none;
color: var(--text-muted);
cursor: pointer;
font-size: 0.9em;
padding: 2px 6px;
border-radius: 3px;
flex-shrink: 0;
}
.dir-enter:hover { color: var(--accent); background: rgba(108,138,255,0.1); }
.browse-error {
margin-top: 4px;
font-size: 0.82em;
color: var(--red);
}
/* Toggle switch */
.toggle {
position: relative;
@ -599,6 +718,184 @@ textarea:focus {
.toggle-sm .slider::after { width: 12px; height: 12px; }
.toggle-sm input:checked + .slider::after { transform: translateX(12px); }
/* ─── Offline banner ───────────────────────────────────── */
.offline-banner {
background: #f59e0b;
color: #1c1917;
padding: 10px 20px;
display: flex;
align-items: center;
gap: 10px;
font-size: 0.9rem;
font-weight: 500;
border-bottom: 2px solid #d97706;
position: sticky;
top: 0;
z-index: 100;
}
.offline-banner .offline-icon { font-size: 1.1rem; }
.offline-banner .offline-sub { margin-left: auto; opacity: 0.7; font-size: 0.8rem; }
/* ─── Sync indicator ──────────────────────────────────── */
.sync-indicator {
display: flex;
align-items: center;
gap: 10px;
padding: 12px 16px;
border-radius: 8px;
margin-bottom: 16px;
font-size: 0.9rem;
font-weight: 500;
}
.sync-ok {
background: rgba(34, 197, 94, 0.12);
color: #16a34a;
border: 1px solid rgba(34, 197, 94, 0.3);
}
.sync-pending {
background: rgba(245, 158, 11, 0.12);
color: #d97706;
border: 1px solid rgba(245, 158, 11, 0.3);
}
.sync-indicator .sync-icon { font-size: 1.1rem; }
.sync-indicator .sync-sub { margin-left: auto; opacity: 0.65; font-size: 0.8rem; }
/* ─── Preset section (config tab) ─────────────────────── */
.preset-section {
margin-bottom: 20px;
padding: 16px;
background: var(--surface, #1e1e2e);
border-radius: 10px;
border: 1px solid var(--border, rgba(255,255,255,0.08));
}
.preset-header {
display: flex;
align-items: center;
gap: 12px;
margin-bottom: 12px;
}
.preset-hint { font-size: 0.78rem; opacity: 0.6; }
.preset-buttons {
display: flex;
gap: 10px;
flex-wrap: wrap;
}
.preset-btn {
display: flex;
flex-direction: column;
align-items: flex-start;
gap: 2px;
padding: 10px 14px;
border-radius: 8px;
border: 1px solid var(--border, rgba(255,255,255,0.12));
background: var(--surface2, rgba(255,255,255,0.04));
cursor: pointer;
color: inherit;
min-width: 140px;
transition: background 0.15s, border-color 0.15s;
}
.preset-btn:hover { background: rgba(99,102,241,0.15); border-color: rgba(99,102,241,0.4); }
.preset-btn .preset-icon { font-size: 1.2rem; }
.preset-btn .preset-name { font-weight: 600; font-size: 0.9rem; }
.preset-btn .preset-desc { font-size: 0.72rem; opacity: 0.65; }
.preset-result { margin-top: 10px; min-height: 20px; font-size: 0.85rem; }
.preset-result .ok { color: #22c55e; }
.preset-result .error { color: #ef4444; }
.preset-spinner { display: none; font-size: 0.85rem; opacity: 0.7; }
.htmx-request .preset-spinner { display: block; }
/* ─── Share error banner & action buttons ─────────────── */
.share-error-banner {
display: flex;
align-items: center;
gap: 8px;
padding: 8px 12px;
background: rgba(239, 68, 68, 0.1);
border: 1px solid rgba(239, 68, 68, 0.3);
border-radius: 6px;
margin-top: 8px;
font-size: 0.85rem;
color: #f87171;
}
.action-btn {
padding: 6px 14px;
border-radius: 6px;
border: 1px solid var(--border, rgba(255,255,255,0.12));
background: var(--surface2, rgba(255,255,255,0.06));
cursor: pointer;
color: inherit;
font-size: 0.85rem;
transition: background 0.15s;
}
.action-btn:hover { background: rgba(99,102,241,0.2); }
.action-btn-sm {
padding: 3px 10px;
border-radius: 4px;
border: 1px solid rgba(239,68,68,0.4);
background: rgba(239,68,68,0.1);
cursor: pointer;
color: #f87171;
font-size: 0.78rem;
margin-left: auto;
}
/* ─── Apply modal ─────────────────────────────────────── */
.modal-overlay {
position: fixed; inset: 0;
background: rgba(0,0,0,0.6);
display: flex; align-items: center; justify-content: center;
z-index: 1000;
}
.modal-card {
background: var(--surface);
border: 1px solid var(--border);
border-radius: 12px;
padding: 28px 32px;
min-width: 380px;
max-width: 460px;
}
.modal-title {
font-size: 1.1em;
margin-bottom: 20px;
}
.modal-steps { display: flex; flex-direction: column; gap: 14px; }
.modal-step {
display: flex; align-items: center; gap: 12px;
font-size: 0.92em;
color: var(--text-muted);
transition: color 0.2s;
}
.modal-step.step-done { color: var(--green); }
.modal-step.step-active { color: var(--text); }
.modal-step.step-error { color: var(--red); }
.step-icon { width: 20px; height: 20px; display: flex; align-items: center; justify-content: center; flex-shrink: 0; }
.step-dot { width: 8px; height: 8px; border-radius: 50%; background: var(--border); }
.step-spinner {
width: 16px; height: 16px;
border: 2px solid var(--border);
border-top-color: var(--accent);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
@keyframes spin { to { transform: rotate(360deg); } }
.modal-error {
margin-top: 14px;
padding: 10px 14px;
background: rgba(248,113,113,0.1);
border: 1px solid var(--red);
border-radius: 6px;
color: var(--red);
font-size: 0.85em;
}
.modal-footer { margin-top: 20px; text-align: right; }
/* ─── Responsive ───────────────────────────────────────── */
@media (max-width: 768px) {


@@ -2,33 +2,48 @@
# See: https://github.com/user/warpgate for documentation
# --- NAS Connections ---
# Each connection defines an SFTP endpoint to a remote NAS.
# Each connection defines an endpoint to a remote NAS.
# Supported protocols: sftp, smb
# The "name" is used as the rclone remote identifier and must be unique.
[[connections]]
# Unique name for this connection (alphanumeric, hyphens, underscores)
name = "nas"
# Remote NAS Tailscale IP or hostname
nas_host = "100.x.x.x"
# SFTP username
nas_user = "admin"
# SFTP password (prefer key_file for security)
# nas_pass = "your-password"
# Path to SSH private key (recommended)
# nas_key_file = "/root/.ssh/id_ed25519"
# SFTP port
sftp_port = 22
host = "100.x.x.x"
# Protocol: "sftp" or "smb"
protocol = "sftp"
# Username
user = "admin"
# Password (prefer key_file for SFTP)
# pass = "your-password"
# Path to SSH private key (SFTP only, recommended)
# key_file = "/root/.ssh/id_ed25519"
# Port (SFTP default: 22, SMB default: 445)
port = 22
# SFTP connection pool size (if multi_thread_streams=4, recommend >= 16)
sftp_connections = 8
connections = 8
# --- Additional NAS (uncomment to add) ---
# --- Additional NAS via SFTP (uncomment to add) ---
# [[connections]]
# name = "office"
# nas_host = "192.168.1.100"
# nas_user = "photographer"
# nas_pass = "secret"
# sftp_port = 22
# sftp_connections = 8
# host = "192.168.1.100"
# protocol = "sftp"
# user = "photographer"
# pass = "secret"
# port = 22
# connections = 8
# --- SMB connection example (uncomment to add) ---
# [[connections]]
# name = "smb-nas"
# host = "192.168.1.200"
# protocol = "smb"
# user = "admin"
# pass = "password" # Required for SMB
# share = "photos" # Windows share name
# # domain = "WORKGROUP" # Optional domain
# # port = 445 # Default: 445
[cache]
# Cache storage directory (should be on SSD, prefer btrfs/ZFS filesystem)
@@ -50,7 +65,7 @@ read_ahead = "512M"
# In-memory buffer size
buffer_size = "256M"
# Number of parallel SFTP streams for single-file downloads (improves cold-read speed)
# If using multi_thread_streams=4, set sftp_connections >= 16 for multi-file concurrency
# If using multi_thread_streams=4, set connections >= 16 for multi-file concurrency
multi_thread_streams = 4
# Minimum file size to trigger multi-thread download
multi_thread_cutoff = "50M"
@@ -98,11 +113,16 @@ webdav_port = 8080
# Each share maps a remote NAS path to a local mount point.
# Each gets its own rclone mount process with independent FUSE mount.
# The "connection" field references a [[connections]] entry by name.
#
# remote_path semantics differ by protocol:
# SFTP: absolute path on the NAS, e.g. "/volume1/photos"
# SMB: path relative to the share defined in the connection, e.g. "/" or "/subfolder"
# (the SMB share name itself is set in [[connections]])
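#
# A minimal sketch of a share on the SMB connection example above
# (names here are hypothetical, not shipped defaults):
# [[shares]]
# name = "smb-photos"
# connection = "smb-nas"
# remote_path = "/"               # root of the "photos" share
# mount_point = "/mnt/smb-photos"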
[[shares]]
name = "photos"
connection = "nas"
remote_path = "/volume1/photos"
remote_path = "/volume1/photos" # SFTP absolute path; for SMB use "/" or "/subfolder"
mount_point = "/mnt/photos"
# [[shares]]


@@ -40,6 +40,13 @@
x-init="startTimer()"
x-effect="localStorage.setItem('wg_auto_refresh', autoRefresh); localStorage.setItem('wg_refresh_interval', refreshInterval); startTimer()"
>
{% if nas_offline %}
<div class="offline-banner" role="alert">
<span class="offline-icon"></span>
<strong>NAS 离线</strong> — 正在使用本地缓存(写入已排队)
<span class="offline-sub">Offline mode: using local cache, writes are queued</span>
</div>
{% endif %}
<div class="shell">
<div class="header">
<div>


@@ -1,7 +1,7 @@
<div id="share-rows" hx-swap-oob="innerHTML:#share-rows">
<div class="cards">
{% for share in shares %}
<div class="card" style="cursor:pointer"
<div class="card" style="cursor:pointer" data-share-health="{{ share.health }}"
hx-get="/tabs/shares?expand={{ share.name }}" hx-target="#tab-content" hx-swap="innerHTML"
@click="activeTab = 'shares'">
<div class="card-header">


@@ -0,0 +1,13 @@
{% if all_synced %}
<div class="sync-indicator sync-ok" id="sync-status" hx-swap-oob="outerHTML:#sync-status">
<span class="sync-icon"></span>
<span class="sync-text">已全部同步 — 可以断网</span>
<span class="sync-sub">All synced — safe to disconnect</span>
</div>
{% else %}
<div class="sync-indicator sync-pending" id="sync-status" hx-swap-oob="outerHTML:#sync-status">
<span class="sync-icon"></span>
<span class="sync-text">同步进行中 — 请勿断网</span>
<span class="sync-sub">Sync in progress — do not disconnect</span>
</div>
{% endif %}


@@ -1,12 +1,47 @@
<script id="config-init" type="application/json">{{ init_json }}</script>
<script>
function configEditorFn() {
// Read config synchronously so x-for renders on the first pass.
// If init() sets config *after* Alpine's first scan, x-for elements
// created in the re-render may miss their event-listener binding.
const _initData = JSON.parse(document.getElementById('config-init').textContent);
function _prepareForEdit(config) {
for (const conn of config.connections) {
// Ensure protocol field exists (default sftp)
if (!conn.protocol) conn.protocol = 'sftp';
// Ensure all optional fields exist for Alpine.js binding
if (conn.pass == null) conn.pass = '';
if (conn.key_file == null) conn.key_file = '';
if (conn.domain == null) conn.domain = '';
if (conn.share == null) conn.share = '';
// Ensure numeric fields have defaults
if (conn.port == null) conn.port = conn.protocol === 'smb' ? 445 : 22;
if (conn.connections == null) conn.connections = 8;
}
if (config.smb_auth.username == null) config.smb_auth.username = '';
if (config.smb_auth.smb_pass == null) config.smb_auth.smb_pass = '';
if (config.warmup.warmup_schedule == null) config.warmup.warmup_schedule = '';
for (const rule of config.warmup.rules) {
if (rule.newer_than == null) rule.newer_than = '';
}
for (const share of config.shares) {
if (share.dir_refresh_interval == null) share.dir_refresh_interval = '';
}
return config;
}
const _config = _prepareForEdit(_initData.config);
return {
config: {},
originalConfig: {},
config: _config,
originalConfig: JSON.parse(JSON.stringify(_config)),
submitting: false,
message: null,
isError: false,
message: _initData.message || null,
isError: _initData.is_error || false,
applyModal: { open: false, steps: [], error: null, done: false },
connTest: {},
browseState: {},
sections: {
connections: true,
shares: true,
@@ -19,41 +54,36 @@ function configEditorFn() {
smb_auth: false,
warmup: false,
dir_refresh: false,
web: false,
notifications: false,
log: false,
},
init() {
const data = JSON.parse(document.getElementById('config-init').textContent);
this.config = this.prepareForEdit(data.config);
this.originalConfig = JSON.parse(JSON.stringify(this.config));
if (data.message) {
this.message = data.message;
this.isError = data.is_error;
}
// config is already set; nothing to do here.
},
/** Convert null optional fields to empty strings for form binding. */
prepareForEdit(config) {
for (const conn of config.connections) {
if (conn.nas_pass == null) conn.nas_pass = '';
if (conn.nas_key_file == null) conn.nas_key_file = '';
}
if (config.smb_auth.username == null) config.smb_auth.username = '';
if (config.smb_auth.smb_pass == null) config.smb_auth.smb_pass = '';
for (const rule of config.warmup.rules) {
if (rule.newer_than == null) rule.newer_than = '';
}
for (const share of config.shares) {
if (share.dir_refresh_interval == null) share.dir_refresh_interval = '';
}
return config;
return _prepareForEdit(config);
},
/** Convert empty optional strings back to null for the API. */
prepareForSubmit(config) {
const c = JSON.parse(JSON.stringify(config));
for (const conn of c.connections) {
if (!conn.nas_pass) conn.nas_pass = null;
if (!conn.nas_key_file) conn.nas_key_file = null;
if (!conn.pass) conn.pass = null;
if (conn.protocol === 'sftp') {
// SFTP: keep key_file, connections; remove SMB-only fields
if (!conn.key_file) conn.key_file = null;
delete conn.domain;
delete conn.share;
} else {
// SMB: keep domain, share; remove SFTP-only fields
if (!conn.domain) conn.domain = null;
delete conn.key_file;
delete conn.connections;
}
}
if (!c.smb_auth.username) c.smb_auth.username = null;
if (!c.smb_auth.smb_pass) c.smb_auth.smb_pass = null;
@@ -68,9 +98,10 @@ function configEditorFn() {
addConnection() {
this.config.connections.push({
name: '', nas_host: '', nas_user: '',
nas_pass: '', nas_key_file: '',
sftp_port: 22, sftp_connections: 8
name: '', host: '', protocol: 'sftp',
user: '', pass: '', key_file: '',
port: 22, connections: 8,
domain: '', share: ''
});
},
@@ -93,9 +124,90 @@ function configEditorFn() {
});
},
async testConn(conn, i) {
if (this.connTest[i] && this.connTest[i].loading) return;
this.connTest = { ...this.connTest, [i]: { loading: true, ok: null, message: '' } };
try {
const payload = this._connPayload(conn);
const resp = await fetch('/api/test-connection', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
const result = await resp.json();
this.connTest = { ...this.connTest, [i]: { loading: false, ok: result.ok, message: result.message } };
} catch (e) {
this.connTest = { ...this.connTest, [i]: { loading: false, ok: false, message: 'Network error: ' + e.message } };
}
},
/** Build a connection payload for test/browse API (name not required). */
_connPayload(conn) {
const base = {
host: conn.host,
protocol: conn.protocol || 'sftp',
user: conn.user,
pass: conn.pass || null,
port: conn.port,
};
if (base.protocol === 'sftp') {
base.key_file = conn.key_file || null;
base.connections = conn.connections || 8;
} else {
base.domain = conn.domain || null;
base.share = conn.share || '';
}
return base;
},
async browseDir(share, i) {
const path = share.remote_path || '/';
this.browseState = { ...this.browseState, [i]: { dirs: [], loading: true, error: '', path } };
const conn = this.config.connections.find(c => c.name === share.connection);
if (!conn) {
this.browseState = { ...this.browseState, [i]: { dirs: [], loading: false, error: 'Connection not found', path } };
return;
}
try {
const payload = { ...this._connPayload(conn), path };
const resp = await fetch('/api/browse', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
const result = await resp.json();
if (result.ok) {
this.browseState = { ...this.browseState, [i]: { dirs: result.dirs, loading: false, error: '', path } };
} else {
this.browseState = { ...this.browseState, [i]: { dirs: [], loading: false, error: result.error || 'Error', path } };
}
} catch (e) {
this.browseState = { ...this.browseState, [i]: { dirs: [], loading: false, error: 'Network error: ' + e.message, path } };
}
},
async browseIntoDir(share, i, subdir) {
const base = (this.browseState[i]?.path || '/').replace(/\/+$/, '');
const newPath = base + '/' + subdir;
share.remote_path = newPath;
await this.browseDir({ ...share, remote_path: newPath }, i);
},
async submitConfig() {
this.submitting = true;
this.message = null;
this.applyModal = {
open: true,
error: null,
done: false,
steps: [
{ label: 'Validating configuration', status: 'active' },
{ label: 'Writing config file', status: 'pending' },
{ label: 'Sending reload command', status: 'pending' },
{ label: 'Restarting services', status: 'pending' },
]
};
try {
const payload = this.prepareForSubmit(this.config);
const resp = await fetch('/config/apply', {
@@ -104,18 +216,64 @@ function configEditorFn() {
body: JSON.stringify(payload)
});
const result = await resp.json();
this.message = result.message;
this.isError = !result.ok;
if (result.ok) {
this.originalConfig = JSON.parse(JSON.stringify(this.config));
if (!result.ok) {
this.applyModal.steps[0].status = 'error';
this.applyModal.error = result.message;
this.submitting = false;
return;
}
// Steps 1-3 all completed (single API call)
this.applyModal.steps[0].status = 'done';
this.applyModal.steps[1].status = 'done';
this.applyModal.steps[2].status = 'done';
this.applyModal.steps[3].status = 'active';
this.message = result.message;
this.isError = false;
this.originalConfig = JSON.parse(JSON.stringify(this.config));
// Watch SSE for service readiness
this._waitForServicesReady();
} catch (e) {
this.message = 'Network error: ' + e.message;
this.isError = true;
this.applyModal.steps[0].status = 'error';
this.applyModal.error = 'Network error: ' + e.message;
}
this.submitting = false;
},
_waitForServicesReady() {
const checkInterval = setInterval(() => {
const shareRows = document.querySelectorAll('[data-share-health]');
if (shareRows.length === 0) {
clearInterval(checkInterval);
this._markServicesDone();
return;
}
const allSettled = Array.from(shareRows).every(el => {
const h = el.dataset.shareHealth;
return h && h !== 'PENDING' && h !== 'PROBING';
});
if (allSettled) {
clearInterval(checkInterval);
this._markServicesDone();
}
}, 500);
// Safety timeout: 30s max wait
setTimeout(() => {
clearInterval(checkInterval);
if (this.applyModal.steps[3].status === 'active') {
this._markServicesDone();
}
}, 30000);
},
_markServicesDone() {
this.applyModal.steps[3].status = 'done';
this.applyModal.done = true;
},
resetConfig() {
this.config = JSON.parse(JSON.stringify(this.originalConfig));
this.message = null;
@@ -135,7 +293,46 @@ if (window.Alpine) {
}
</script>
<div x-data="configEditor">
<div x-data="configEditorFn()">
<!-- Preset buttons -->
<div class="preset-section">
<div class="preset-header">
<span class="section-label">快速预设 / Quick Presets</span>
<span class="preset-hint">一键应用最佳实践配置,不影响 NAS 连接和 shares 设置</span>
</div>
<div class="preset-buttons">
<button class="preset-btn preset-photographer"
hx-post="/api/preset/photographer"
hx-target="#preset-result"
hx-swap="innerHTML"
hx-indicator="#preset-spinner">
<span class="preset-icon">📷</span>
<span class="preset-name">摄影师</span>
<span class="preset-desc">RAW 大文件,256M 分块读取</span>
</button>
<button class="preset-btn preset-video"
hx-post="/api/preset/video"
hx-target="#preset-result"
hx-swap="innerHTML"
hx-indicator="#preset-spinner">
<span class="preset-icon">🎬</span>
<span class="preset-name">视频剪辑</span>
<span class="preset-desc">顺序读取优化,1G 预读缓冲</span>
</button>
<button class="preset-btn preset-office"
hx-post="/api/preset/office"
hx-target="#preset-result"
hx-swap="innerHTML"
hx-indicator="#preset-spinner">
<span class="preset-icon">💼</span>
<span class="preset-name">文档办公</span>
<span class="preset-desc">小文件响应,30m 目录缓存</span>
</button>
</div>
<div id="preset-result" class="preset-result"></div>
<div id="preset-spinner" class="htmx-indicator preset-spinner">应用中...</div>
</div>
<!-- Message banner -->
<template x-if="message">
@@ -153,7 +350,15 @@ if (window.Alpine) {
<div class="array-item">
<div class="item-header">
<strong x-text="conn.name || 'New Connection'"></strong>
<button type="button" @click="config.connections.splice(i, 1)" class="remove-btn">Remove</button>
<div class="item-header-actions">
<button type="button" @click="testConn(conn, i)" class="test-btn">
<span x-show="!(connTest[i] && connTest[i].loading)">Test</span>
<span x-show="connTest[i] && connTest[i].loading" style="display:none">Testing…</span>
</button>
<span x-show="connTest[i] && connTest[i].ok === true" class="test-ok" style="display:none">✓ Connected</span>
<span x-show="connTest[i] && connTest[i].ok === false" class="test-fail" style="display:none" x-text="connTest[i] ? connTest[i].message : ''"></span>
<button type="button" @click="config.connections.splice(i, 1)" class="remove-btn">Remove</button>
</div>
</div>
<div class="field-grid">
<div class="field-row">
@@ -161,28 +366,46 @@ if (window.Alpine) {
<input type="text" x-model="conn.name" required placeholder="e.g. home-nas">
</div>
<div class="field-row">
<label>NAS Host *</label>
<input type="text" x-model="conn.nas_host" required placeholder="e.g. 100.64.0.1">
<label>Protocol *</label>
<select x-model="conn.protocol" @change="conn.port = conn.protocol === 'smb' ? 445 : 22">
<option value="sftp">SFTP</option>
<option value="smb">SMB</option>
</select>
</div>
<div class="field-row">
<label>Host *</label>
<input type="text" x-model="conn.host" required placeholder="e.g. 100.64.0.1">
</div>
<div class="field-row">
<label>Username *</label>
<input type="text" x-model="conn.nas_user" required placeholder="e.g. admin">
<input type="text" x-model="conn.user" required placeholder="e.g. admin">
</div>
<div class="field-row">
<label>Password</label>
<input type="password" x-model="conn.nas_pass" placeholder="(optional if using key)">
<input type="password" x-model="conn.pass"
:placeholder="conn.protocol === 'smb' ? 'Required for SMB' : '(optional if using key)'">
</div>
<div class="field-row">
<label>Port</label>
<input type="number" x-model.number="conn.port" min="1" max="65535">
</div>
<!-- SFTP-only fields -->
<div class="field-row" x-show="conn.protocol === 'sftp'" x-transition>
<label>SSH Key File</label>
<input type="text" x-model="conn.nas_key_file" class="mono" placeholder="/root/.ssh/id_rsa">
<input type="text" x-model="conn.key_file" class="mono" placeholder="/root/.ssh/id_rsa">
</div>
<div class="field-row">
<label>SFTP Port</label>
<input type="number" x-model.number="conn.sftp_port" min="1" max="65535">
</div>
<div class="field-row">
<div class="field-row" x-show="conn.protocol === 'sftp'" x-transition>
<label>SFTP Connections</label>
<input type="number" x-model.number="conn.sftp_connections" min="1" max="128">
<input type="number" x-model.number="conn.connections" min="1" max="128">
</div>
<!-- SMB-only fields -->
<div class="field-row" x-show="conn.protocol === 'smb'" x-transition>
<label>Share Name *</label>
<input type="text" x-model="conn.share" required placeholder="e.g. photos">
</div>
<div class="field-row" x-show="conn.protocol === 'smb'" x-transition>
<label>Domain</label>
<input type="text" x-model="conn.domain" placeholder="e.g. WORKGROUP (optional)">
</div>
</div>
</div>
@@ -219,7 +442,28 @@ if (window.Alpine) {
</div>
<div class="field-row">
<label>Remote Path *</label>
<input type="text" x-model="share.remote_path" class="mono" required placeholder="/volume1/photos">
<div class="browse-combo">
<input type="text" x-model="share.remote_path" class="mono" required placeholder="/volume1/photos"
@change="browseState = { ...browseState, [i]: null }">
<button type="button" class="browse-btn"
:disabled="(browseState[i] && browseState[i].loading) || !share.connection"
@click="browseDir(share, i)">
<span x-show="!(browseState[i] && browseState[i].loading)">Browse</span>
<span x-show="browseState[i] && browseState[i].loading" style="display:none">Loading…</span>
</button>
</div>
<div x-show="browseState[i] && browseState[i].dirs && browseState[i].dirs.length > 0" class="dir-dropdown">
<template x-for="d in (browseState[i] && browseState[i].dirs || [])" :key="d">
<div class="dir-item">
<span class="dir-name"
@click="share.remote_path = browseState[i].path.replace(/\/+$/, '') + '/' + d; browseState = { ...browseState, [i]: { ...browseState[i], dirs: [] } }"
x-text="d"></span>
<button type="button" class="dir-enter" title="Enter directory"
@click="browseIntoDir(share, i, d)">→</button>
</div>
</template>
</div>
<div x-show="browseState[i] && browseState[i].error" class="browse-error" x-text="browseState[i] ? browseState[i].error : ''"></div>
</div>
<div class="field-row">
<label>Mount Point *</label>
@@ -450,6 +694,13 @@ if (window.Alpine) {
Auto-warmup on Startup
</label>
</div>
<div class="field-row" style="margin-top:12px">
<label>Schedule (cron)</label>
<input type="text" x-model="config.warmup.warmup_schedule" placeholder='empty = disabled, e.g. "0 2 * * *" = daily 2am' style="max-width:360px">
</div>
<p style="font-size:0.82em;color:var(--text-muted);margin-top:4px;margin-bottom:8px">
Standard 5-field cron expression. When set, warmup rules also run on this schedule, in addition to the startup warmup.
</p>
<div style="margin-top:16px">
<label style="font-size:0.85em;color:var(--text-muted);display:block;margin-bottom:8px">Warmup Rules</label>
<template x-for="(rule, i) in config.warmup.rules" :key="i">
@@ -519,6 +770,85 @@ if (window.Alpine) {
</div>
</section>
<!-- ═══ Section: Web ═══ -->
<section class="config-section">
<div class="section-header" @click="sections.web = !sections.web">
<h3>Web UI <span class="tier-badge tier-none">No restart</span></h3>
<span class="chevron" x-text="sections.web ? '▾' : '▸'"></span>
</div>
<div class="section-body" x-show="sections.web" x-transition>
<div class="field-row">
<label>Password</label>
<input type="password" x-model="config.web.password" placeholder="Leave empty to disable authentication" style="max-width:320px">
</div>
<p style="font-size:0.82em;color:var(--text-muted);margin-top:8px">
Protects the Web UI with HTTP Basic Auth. Leave empty to allow unauthenticated access.
</p>
</div>
</section>
<!-- ═══ Section: Notifications ═══ -->
<section class="config-section">
<div class="section-header" @click="sections.notifications = !sections.notifications">
<h3>Notifications <span class="tier-badge tier-none">No restart</span></h3>
<span class="chevron" x-text="sections.notifications ? '▾' : '▸'"></span>
</div>
<div class="section-body" x-show="sections.notifications" x-transition>
<div class="field-grid">
<div class="field-row">
<label>Webhook URL</label>
<input type="text" x-model="config.notifications.webhook_url" placeholder="https://... (Telegram/Bark/DingTalk)">
</div>
<div class="field-row">
<label>Cache Threshold %</label>
<input type="number" x-model.number="config.notifications.cache_threshold_pct" min="1" max="100" style="max-width:120px">
</div>
<div class="field-row">
<label>NAS Offline Minutes</label>
<input type="number" x-model.number="config.notifications.nas_offline_minutes" min="1" style="max-width:120px">
</div>
<div class="field-row">
<label>Write-back Depth</label>
<input type="number" x-model.number="config.notifications.writeback_depth" min="1" style="max-width:120px">
</div>
</div>
<p style="font-size:0.82em;color:var(--text-muted);margin-top:8px">
Send push notifications when the cache is nearly full, the NAS goes offline, or the write-back queue grows large.
Leave Webhook URL empty to disable all notifications.
</p>
</div>
</section>
<!-- ═══ Section: Log ═══ -->
<section class="config-section">
<div class="section-header" @click="sections.log = !sections.log">
<h3>Log <span class="tier-badge tier-global">Full restart</span></h3>
<span class="chevron" x-text="sections.log ? '▾' : '▸'"></span>
</div>
<div class="section-body" x-show="sections.log" x-transition>
<div class="field-grid">
<div class="field-row">
<label>Log File</label>
<input type="text" x-model="config.log.file" class="mono" placeholder="/var/log/warpgate/warpgate.log (empty = no file logging)">
</div>
<div class="field-row">
<label>Log Level</label>
<select x-model="config.log.level">
<option value="error">error</option>
<option value="warn">warn</option>
<option value="info">info</option>
<option value="debug">debug</option>
<option value="trace">trace</option>
</select>
</div>
</div>
<p style="font-size:0.82em;color:var(--text-muted);margin-top:8px">
Changes to log settings require a full service restart to take effect.
Leave Log File empty to disable file logging (stdout only).
</p>
</div>
</section>
<!-- ═══ Form Actions ═══ -->
<div class="form-actions" style="margin-top:24px">
<button type="button" @click="submitConfig()" class="btn btn-primary" :disabled="submitting">
@@ -528,4 +858,45 @@ if (window.Alpine) {
<button type="button" @click="resetConfig()" class="btn btn-secondary">Reset</button>
</div>
<!-- Apply Config Progress Modal -->
<div class="modal-overlay" x-show="applyModal.open" x-transition.opacity x-cloak
@keydown.escape.window="if (applyModal.done || applyModal.error) applyModal.open = false">
<div class="modal-card" @click.stop>
<h3 class="modal-title">Applying Configuration</h3>
<div class="modal-steps">
<template x-for="(step, i) in applyModal.steps" :key="i">
<div class="modal-step" :class="'step-' + step.status">
<span class="step-icon">
<template x-if="step.status === 'done'">
<svg width="16" height="16" viewBox="0 0 16 16" fill="none">
<path d="M3 8.5L6.5 12L13 4" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
</template>
<template x-if="step.status === 'active'">
<span class="step-spinner"></span>
</template>
<template x-if="step.status === 'error'">
<svg width="16" height="16" viewBox="0 0 16 16" fill="none">
<path d="M4 4L12 12M12 4L4 12" stroke="currentColor" stroke-width="2" stroke-linecap="round"/>
</svg>
</template>
<template x-if="step.status === 'pending'">
<span class="step-dot"></span>
</template>
</span>
<span class="step-label" x-text="step.label"></span>
</div>
</template>
</div>
<div x-show="applyModal.error" class="modal-error" x-text="applyModal.error"></div>
<div class="modal-footer">
<button class="btn btn-primary"
x-show="applyModal.done || applyModal.error"
@click="applyModal.open = false">
Close
</button>
</div>
</div>
</div>
</div>


@@ -19,6 +19,20 @@
</div>
</div>
{% if all_synced %}
<div class="sync-indicator sync-ok" id="sync-status">
<span class="sync-icon"></span>
<span class="sync-text">已全部同步 — 可以断网</span>
<span class="sync-sub">All synced — safe to disconnect</span>
</div>
{% else %}
<div class="sync-indicator sync-pending" id="sync-status">
<span class="sync-icon"></span>
<span class="sync-text">同步进行中 — 请勿断网</span>
<span class="sync-sub">Sync in progress — do not disconnect</span>
</div>
{% endif %}
<div id="share-rows">
<div class="cards">
{% for share in shares %}


@@ -70,6 +70,27 @@
<div class="value">{{ share.transfers }}</div>
</div>
</div>
{% if share.health == "FAILED" %}
<div class="share-error-banner">
<span class="error-icon"></span>
<span class="error-msg">{{ share.health_message }}</span>
<button class="action-btn-sm"
hx-post="/api/reconnect/{{ share.name }}"
hx-target="closest .share-error-banner"
hx-swap="outerHTML">
重试
</button>
</div>
{% endif %}
<div style="margin-bottom:12px">
<button class="action-btn"
hx-post="/api/reconnect/{{ share.name }}"
hx-confirm="重新连接 {{ share.name }}"
hx-target="this"
hx-swap="outerHTML">
重新连接
</button>
</div>
<table class="info-table">
<tr><td>Health</td><td>{{ share.health }}</td></tr>
{% if share.health == "FAILED" %}

tests/09-cli/test-preset-cli.sh Executable file

@@ -0,0 +1,144 @@
#!/usr/bin/env bash
# Test: `warpgate preset <name>` applies correct values to config file.
#
# Verifies that each preset writes the expected cache.max_size to the config,
# that CLI and API presets are unified (same source of truth), and that the
# command exits 0 for valid presets and non-zero for unknown ones.
#
# Does NOT require a running warpgate daemon — only needs a config file.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/../harness/helpers.sh"
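# gen_config and assert_output_contains come from harness/helpers.sh (not
# shown in this diff). A hypothetical sketch of the assertion helper, for
# reference only; the actual implementation may differ:
#   assert_output_contains() {
#     if ! printf '%s\n' "$1" | grep -q "$2"; then
#       echo "FAIL: output does not contain '$2'"
#       exit 1
#     fi
#   }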
setup_test_env
trap teardown_test_env EXIT
# Generate a minimal config pointing at a fake NAS (we don't connect to it)
gen_config "nas_host=127.0.0.1"
# ── photographer preset ──────────────────────────────────────────────────────
output=$("$WARPGATE_BIN" preset photographer -c "$TEST_CONFIG" 2>&1) || {
echo "FAIL: 'warpgate preset photographer' exited non-zero"
echo " output: $output"
exit 1
}
assert_output_contains "$output" "photographer"
# Verify cache.max_size was written as 500G
if ! grep -q 'max_size = "500G"' "$TEST_CONFIG"; then
echo "FAIL: photographer preset did not write cache.max_size = \"500G\""
echo " config: $(grep max_size "$TEST_CONFIG" || echo '(not found)')"
exit 1
fi
# Verify chunk_size = 256M
if ! grep -q 'chunk_size = "256M"' "$TEST_CONFIG"; then
echo "FAIL: photographer preset did not write chunk_size = \"256M\""
exit 1
fi
# Verify chunk_limit = 1G (field added in this round of fixes)
if ! grep -q 'chunk_limit = "1G"' "$TEST_CONFIG"; then
echo "FAIL: photographer preset did not write chunk_limit = \"1G\""
exit 1
fi
# Verify multi_thread_streams = 4
if ! grep -q 'multi_thread_streams = 4' "$TEST_CONFIG"; then
echo "FAIL: photographer preset did not write multi_thread_streams = 4"
exit 1
fi
# Verify webdav is disabled for photographer
if grep -q 'enable_webdav = true' "$TEST_CONFIG"; then
echo "FAIL: photographer preset should NOT enable WebDAV"
exit 1
fi
# ── video preset ─────────────────────────────────────────────────────────────
gen_config "nas_host=127.0.0.1"
output=$("$WARPGATE_BIN" preset video -c "$TEST_CONFIG" 2>&1) || {
echo "FAIL: 'warpgate preset video' exited non-zero"
echo " output: $output"
exit 1
}
if ! grep -q 'max_size = "1T"' "$TEST_CONFIG"; then
echo "FAIL: video preset did not write cache.max_size = \"1T\""
exit 1
fi
if ! grep -q 'chunk_size = "512M"' "$TEST_CONFIG"; then
echo "FAIL: video preset did not write chunk_size = \"512M\""
exit 1
fi
if ! grep -q 'chunk_limit = "2G"' "$TEST_CONFIG"; then
echo "FAIL: video preset did not write chunk_limit = \"2G\""
exit 1
fi
if ! grep -q 'multi_thread_streams = 2' "$TEST_CONFIG"; then
echo "FAIL: video preset did not write multi_thread_streams = 2"
exit 1
fi
# ── office preset ─────────────────────────────────────────────────────────────
gen_config "nas_host=127.0.0.1"
output=$("$WARPGATE_BIN" preset office -c "$TEST_CONFIG" 2>&1) || {
echo "FAIL: 'warpgate preset office' exited non-zero"
echo " output: $output"
exit 1
}
if ! grep -q 'max_size = "50G"' "$TEST_CONFIG"; then
echo "FAIL: office preset did not write cache.max_size = \"50G\""
exit 1
fi
# office buffer_size must be 128M (not 64M — unified in Step 1 fix)
if ! grep -q 'buffer_size = "128M"' "$TEST_CONFIG"; then
echo "FAIL: office preset should write buffer_size = \"128M\", got:"
grep buffer_size "$TEST_CONFIG" || echo " (not found)"
exit 1
fi
# office should enable WebDAV
if ! grep -q 'enable_webdav = true' "$TEST_CONFIG"; then
echo "FAIL: office preset should enable WebDAV"
exit 1
fi
# office write_back should be 5s (unified; was incorrectly 3s in API before fix)
if ! grep -q 'write_back = "5s"' "$TEST_CONFIG"; then
echo "FAIL: office preset should write write_back = \"5s\""
exit 1
fi
# ── unknown preset returns non-zero ──────────────────────────────────────────
gen_config "nas_host=127.0.0.1"
if "$WARPGATE_BIN" preset bad-preset -c "$TEST_CONFIG" 2>&1; then
echo "FAIL: unknown preset should exit non-zero"
exit 1
fi
# ── config remains parseable after preset ─────────────────────────────────────
gen_config "nas_host=127.0.0.1"
"$WARPGATE_BIN" preset photographer -c "$TEST_CONFIG" > /dev/null 2>&1
# `warpgate status` parses the config; it will fail the mount check but not
# the config parse — ensure it doesn't error on config parsing
status_out=$("$WARPGATE_BIN" status -c "$TEST_CONFIG" 2>&1) || true
if echo "$status_out" | grep -qi "failed to parse\|toml\|invalid"; then
echo "FAIL: config written by preset is not parseable"
echo " output: $status_out"
exit 1
fi
echo "PASS: $(basename "$0" .sh)"


@@ -0,0 +1,79 @@
#!/usr/bin/env bash
# Test: `warpgate update` checks for newer versions.
#
# Verifies:
# 1. The command exists and is dispatchable (no "unknown subcommand" error).
# 2. It outputs a version string in the expected format.
# 3. With --apply it prints installation instructions.
# 4. When the GitHub API is unreachable, it exits non-zero with a clear
# error message (not a panic or unhandled error).
#
# If the build host has no internet access the network tests are skipped.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/../harness/helpers.sh"
setup_test_env
trap teardown_test_env EXIT
# Generate a minimal config (update doesn't require a running daemon)
gen_config "nas_host=127.0.0.1"
# ── 1. Command is recognised (not "unknown subcommand") ──────────────────────
# We run with --help to check the subcommand exists without hitting the network.
if ! "$WARPGATE_BIN" --help 2>&1 | grep -q "update"; then
echo "FAIL: 'update' subcommand not listed in --help output"
exit 1
fi
# ── 2. Check network availability ────────────────────────────────────────────
_has_network=0
if curl -sf --max-time 3 https://api.github.com > /dev/null 2>&1; then
_has_network=1
fi
# ── 3. Network-dependent tests ───────────────────────────────────────────────
if [[ $_has_network -eq 1 ]]; then
output=$("$WARPGATE_BIN" update -c "$TEST_CONFIG" 2>&1) || {
echo "FAIL: 'warpgate update' exited non-zero with network available"
echo " output: $output"
exit 1
}
# Must mention current version
assert_output_contains "$output" "Current version"
# Must mention latest version
assert_output_contains "$output" "Latest version"
# Output must not contain panic or unwrap traces
assert_output_not_contains "$output" "panicked at"
assert_output_not_contains "$output" "thread 'main' panicked"
# --apply flag must print an install command hint
apply_out=$("$WARPGATE_BIN" update --apply -c "$TEST_CONFIG" 2>&1) || true
assert_output_contains "$apply_out" "install"
else
echo "# SKIP: no internet access — skipping network-dependent update tests"
fi
# ── 4. Clean error on network failure ────────────────────────────────────────
# Simulate an unreachable GitHub API by pointing the updater at a closed
# local port via the override env var.
# We expect a non-zero exit and a human-readable error message, not a panic.
export WARPGATE_GITHUB_API_OVERRIDE="https://127.0.0.1:19999"
# Connecting to the closed local port should fail fast (ECONNREFUSED), so the
# update command should print a clean error rather than hang or panic.
err_out=$("$WARPGATE_BIN" update -c "$TEST_CONFIG" 2>&1) || true
# Regardless of network result, no panics
assert_output_not_contains "$err_out" "panicked at"
assert_output_not_contains "$err_out" "thread 'main' panicked"
echo "PASS: $(basename "$0" .sh)"
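The `assert_output_contains` / `assert_output_not_contains` helpers come from `harness/helpers.sh`, which is not part of this diff. A minimal sketch of what they are assumed to look like (the real implementations may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the harness assertions used above, not the
# actual harness/helpers.sh implementation.
assert_output_contains() {
  local output="$1" needle="$2"
  # -F: match the needle literally; -- guards against needles starting with "-"
  if ! grep -qF -- "$needle" <<<"$output"; then
    echo "FAIL: expected output to contain: $needle"
    exit 1
  fi
}
assert_output_not_contains() {
  local output="$1" needle="$2"
  if grep -qF -- "$needle" <<<"$output"; then
    echo "FAIL: output must not contain: $needle"
    exit 1
  fi
}
```

Both helpers exit the whole test script on failure, which is why the callers above don't need to check a return value.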


@@ -0,0 +1,69 @@
#!/usr/bin/env bash
# Test: adaptive bandwidth throttling engages and adjusts the bwlimit.
#
# Strategy:
# 1. Configure a small limit_up (e.g. 5M) with adaptive=true.
# 2. Start warpgate and induce steady write-back traffic by writing files
# that need syncing to the NAS.
# 3. Wait for the supervisor's adaptive window to fill (6 × 2 s = 12 s).
# 4. Query /core/bwlimit via RC API on the rclone port and verify the
# limit has been adjusted from the original configured value.
#
# Requires: root (for FUSE mounts), mock NAS.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/../harness/helpers.sh"
source "$SCRIPT_DIR/../harness/mock-nas.sh"
require_root
setup_test_env
trap teardown_test_env EXIT
start_mock_nas
# Configure with a low upload limit and adaptive=true
gen_config \
"bandwidth.limit_up=5M" \
"bandwidth.adaptive=true" \
"writeback.write_back=1s"
start_warpgate
wait_for_mount 60
wait_for_rc_api 30
# Write several files to trigger write-back traffic
for i in $(seq 1 20); do
dd if=/dev/urandom of="$TEST_MOUNT/adaptive-test-$i.bin" bs=512K count=1 2>/dev/null
done
# Give the supervisor enough cycles for the adaptive window to fill:
# ADAPTIVE_WINDOW_SIZE=6 samples × POLL_INTERVAL=2s = ~12s minimum + margin
sleep 20
# Check for adaptive log line
if grep -q "Adaptive bwlimit adjusted" "$TEST_DIR/warpgate.log" 2>/dev/null; then
echo "# Adaptive adjustment logged"
else
# Even if the limit wasn't adjusted (traffic may be 0 without real NAS
# write-back happening), the supervisor must not have crashed.
if ! kill -0 "$WARPGATE_PID" 2>/dev/null; then
echo "FAIL: warpgate crashed during adaptive bandwidth test"
exit 1
fi
echo "# No adaptive adjustment this run (traffic level may have been stable)"
fi
# Confirm the supervisor is still alive
if ! kill -0 "$WARPGATE_PID" 2>/dev/null; then
echo "FAIL: warpgate is not running after adaptive bandwidth test"
exit 1
fi
# Confirm no panic in logs
if grep -q "panicked at\|thread.*panicked" "$TEST_DIR/warpgate.log" 2>/dev/null; then
echo "FAIL: panic detected in warpgate log"
grep "panicked" "$TEST_DIR/warpgate.log" | head -5
exit 1
fi
echo "PASS: $(basename "$0" .sh)"
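Step 4 of the strategy comment mentions querying `/core/bwlimit`, which this script approximates by log-grepping. A hedged sketch of a direct query against rclone's remote-control API (`rc_bwlimit_url` and `query_bwlimit` are illustrative names, not harness helpers; the RC port is assumed reachable on localhost):

```shell
#!/usr/bin/env bash
# Illustrative only: rclone's RC endpoint "core/bwlimit" answers a POST
# with JSON such as {"bytesPerSecond":5242880,"rate":"5M"}.
rc_bwlimit_url() {
  local port="${1:-5572}"   # 5572 is rclone's default RC port
  echo "http://127.0.0.1:${port}/core/bwlimit"
}
query_bwlimit() {
  curl -s -X POST "$(rc_bwlimit_url "${1:-5572}")"
}
```

A stricter version of this test could assert that the reported `rate` differs from the configured `5M` once the adaptive window has filled.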


@@ -0,0 +1,64 @@
#!/usr/bin/env bash
# Test: warmup_schedule triggers warmup at the configured cron time.
#
# Strategy: set warmup_schedule to "* * * * *" (every minute) so the
# supervisor fires at the next 60-second boundary. We also set a short
# dir-cache-time so the mount comes up fast. After the mount is ready we
# wait up to 90 s for a "Scheduled warmup triggered" log line.
#
# Requires: root (for FUSE mounts), a real mock NAS for rclone to connect to.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/../harness/helpers.sh"
source "$SCRIPT_DIR/../harness/mock-nas.sh"
require_root
setup_test_env
trap teardown_test_env EXIT
# Seed a file in the mock NAS so warmup has something to do
start_mock_nas
mkdir -p "$NAS_ROOT/warmup-dir"
echo "test content" > "$NAS_ROOT/warmup-dir/file.txt"
# Generate config with:
# - a warmup rule pointing at the seeded directory
# - warmup_schedule = "* * * * *" (every minute — fires within 60 s)
# - warmup.auto = false (we rely on the cron schedule only)
gen_config \
"warmup_auto=false" \
"warmup_schedule=* * * * *" \
"warmup.rules=[[warmup.rules]]\nshare = \"data\"\npath = \"warmup-dir\""
# Start warpgate and wait for the mount to be ready
start_warpgate
wait_for_mount 60
wait_for_rc_api 30
# The cron expression "* * * * *" fires every minute.
# We allow up to 90 s for the trigger log line to appear.
TIMEOUT=90
DEADLINE=$((SECONDS + TIMEOUT))
triggered=0
while [[ $SECONDS -lt $DEADLINE ]]; do
if grep -q "Scheduled warmup triggered" "$TEST_DIR/warpgate.log" 2>/dev/null; then
triggered=1
break
fi
sleep 2
done
if [[ $triggered -eq 0 ]]; then
echo "FAIL: 'Scheduled warmup triggered' not found in log within ${TIMEOUT}s"
echo "--- warpgate.log tail ---"
tail -30 "$TEST_DIR/warpgate.log" 2>/dev/null || true
exit 1
fi
# Verify the schedule string appears in the trigger log line
if ! grep "Scheduled warmup triggered" "$TEST_DIR/warpgate.log" | grep -q "schedule"; then
echo "FAIL: trigger log line should mention the schedule expression"
exit 1
fi
echo "PASS: $(basename "$0" .sh)"


@@ -49,6 +49,7 @@ _gen_config() {
local webdav_port="8080"
local warmup_auto="false"
local warmup_schedule=""
local warmup_rules=""
local smb_auth_enabled="false"
@@ -93,6 +94,7 @@ _gen_config() {
protocols.nfs_allowed_network|nfs_allowed_network) nfs_allowed_network="$value" ;;
protocols.webdav_port|webdav_port) webdav_port="$value" ;;
warmup.auto|warmup_auto) warmup_auto="$value" ;;
warmup.warmup_schedule|warmup_schedule) warmup_schedule="$value" ;;
warmup.rules) warmup_rules="$value" ;;
smb_auth.enabled|smb_auth_enabled) smb_auth_enabled="$value" ;;
smb_auth.username|smb_auth_username) smb_auth_username="$value" ;;
@@ -149,6 +151,11 @@ webdav_port = $webdav_port
auto = $warmup_auto
CONFIG_EOF
# Append warmup_schedule if set
if [[ -n "$warmup_schedule" ]]; then
echo "warmup_schedule = \"$warmup_schedule\"" >> "$config_file"
fi
# Append smb_auth section if enabled
if [[ "$smb_auth_enabled" == "true" ]]; then
cat >> "$config_file" <<SMB_AUTH_EOF

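The hunks above extend `_gen_config` with an optional `warmup_schedule` key. The append-only-when-set pattern they rely on can be shown standalone (this is a simplified sketch with the harness's variable names, not the harness itself):

```shell
#!/usr/bin/env bash
# Fixed keys go inside the heredoc; optional keys are appended only when
# their variable is non-empty, so unset options never emit blank values.
set -euo pipefail
config_file=$(mktemp)
warmup_auto="false"
warmup_schedule="* * * * *"   # an empty string would omit the key entirely
cat > "$config_file" <<CONFIG_EOF
[warmup]
auto = $warmup_auto
CONFIG_EOF
if [[ -n "$warmup_schedule" ]]; then
  echo "warmup_schedule = \"$warmup_schedule\"" >> "$config_file"
fi
cat "$config_file"
```

Printing the `[warmup]` table last makes it easy to eyeball that the schedule key lands after the fixed `auto` key, matching what the warmup-cron test greps for.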

@@ -52,6 +52,7 @@ CATEGORIES=(
07-network
08-crash-recovery
09-cli
10-scheduled
)
# Filter to specific category if requested