mirror of
https://github.com/andsyrovatko/s4k-pve-rsync-backup.git
synced 2026-04-21 22:08:53 +02:00
feat: initial release of pve-rsync-backup script
- Added PVE vzdump log parsing with awk
- Implemented rsync transfer with filelist generation
- Added safety triggers for remote cleanup
- Added dependency and SSH key validation
# 💾 PVE Rsync Backup Pro (Bash)

---

### 🚀 The Problem

Standard Proxmox backup solutions often leave temporary dumps on the node or lack flexibility in how files are transferred to offsite storage. In high-density ISP or homelab environments, you need a surgical approach: back up a specific VM, transfer it immediately, and leave zero traces on the source node.
### 🛠 The Solution

This script is a "surgical" automation layer for Proxmox vzdump. Unlike generic scripts, it **parses the real-time log** to identify the exact archive path, transfers it via `rsync`, and performs a verified cleanup. It’s designed for those who value storage efficiency and data portability.
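The log-parsing step can be sketched like this (a minimal sketch: the sample log line and the exact `awk` pattern are illustrative, not the script's verbatim code):

```bash
#!/usr/bin/env bash
# Illustrative vzdump log line; in the real script this comes from live output.
sample_log="INFO: creating vzdump archive '/mnt/pve/store/dump/vzdump-qemu-101-2026_04_15-14_53_00.vma.zst'"

# Split on single quotes and take the quoted *.vma.zst path.
archive_path=$(printf '%s\n' "$sample_log" \
  | awk -F"'" '/creating.*archive/ && /\.vma\.zst/ {print $2}')

echo "$archive_path"
```

Parsing the log beats globbing the dump directory by timestamp: it cannot pick up a stale archive from an earlier run.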
### 🔑 Key Features

* **Precision Extraction:** Uses `awk` to grab the exact `.vma.zst` path from the vzdump output. No more "guessing" based on timestamps.
* **Surgical Cleanup:** Automatically removes the remote dump and its log file from the PVE node only after a successful transfer.
* **Safety Triggers:** Built-in validation ensures the script never executes `rm` on directories or incorrect file types.
* **Rsync-Powered:** Uses `files-from` logic for atomic transfers of both VM configurations and disk images.
* **ISP-Grade Logging:** Detailed logs with timestamps, console output, and `syslog` integration (via `logger`).
* **Node Whitelisting:** Verifies the node name to prevent accidental execution on the wrong host.
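The safety-trigger idea can be illustrated with a small guard (a sketch only; the helper name `safe_remove_check` is hypothetical and the real script's checks may differ):

```bash
#!/usr/bin/env bash
# Sketch of a cleanup guard: refuse to delete anything that is not a plain
# path with a known dump extension. Hypothetical helper, for illustration.
safe_remove_check() {
  local target="$1"
  [ -n "$target" ] || return 1      # empty path: refuse
  [ -d "$target" ] && return 1      # directory: refuse
  case "$target" in
    *.vma.zst|*.log) return 0 ;;    # only known dump/log file types
    *) return 1 ;;
  esac
}

safe_remove_check "/backups/dump/vzdump-qemu-101.vma.zst" && echo "ok to remove"
safe_remove_check "/etc" || echo "refused"
```

Only paths that pass a guard like this would ever be handed to the remote `rm`.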
### 📦 Dependencies & Requirements

| Component | Role | Requirement |
| :--- | :--- | :--- |
| **SSH Keys** | Authentication | Passwordless root/sudo access to PVE |
| **Sudo** | Permissions | Access to `qm`, `vzdump`, and `rm` in the **sudoers** of the PVE node |
| **Rsync** | Transfer | Installed on both Local & Remote nodes |
| **Mail** | Reporting | `mailutils` or `bsd-mailx` (optional) |
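On the PVE node, the sudo requirement might translate into an `/etc/sudoers.d/` entry like the following (a sketch: the user name `backupuser` and the command paths are assumptions; adjust them to your setup):

```
# /etc/sudoers.d/pve-rsync-backup (illustrative; user and paths are assumptions)
backupuser ALL=(root) NOPASSWD: /usr/sbin/qm, /usr/bin/vzdump, /bin/rm
```

Validate the file with `visudo -cf /etc/sudoers.d/pve-rsync-backup` before relying on it.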
### 📖 Usage

1. Initial Setup:

   Clone the script, make it executable, and create your configuration file:

   ```bash
   chmod +x pve-rsync-backup.sh
   cp pve-rsync-backup.conf.example pve-rsync-backup.conf
   ```

2. Configure:

   Edit `pve-rsync-backup.conf` with your SSH keys, allowed nodes, and rsync modules.

3. Run:

   Launch the backup task manually or add it to `crontab`:

   ```bash
   # Usage: ./pve-rsync-backup.sh <Node_IP/FQDN> <VM_ID>
   ./pve-rsync-backup.sh 192.168.1.10 101
   ```
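For unattended runs, a crontab entry might look like this (the schedule, install path, and log redirect are all illustrative, not defaults of the script):

```
# Nightly at 02:30: back up VM 101 from node 192.168.1.10 (paths illustrative)
30 2 * * * /opt/pve-rsync-backup/pve-rsync-backup.sh 192.168.1.10 101 >> /var/log/pve-rsync-backup.cron.log 2>&1
```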
### 📁 Backup Structure

The script organizes backups by node and VM ID for easy navigation:

```
/backups/
└── node-name/                              <-- Node name resolved from 192.168.1.10
    └── vm101/                              <-- VM ID
        ├── 20260415_1453/                  <-- Current backup
        │   ├── 101-filelist.txt            <-- Rsync file list for this backup
        │   ├── Local_README.md             <-- Short instructions to restore the VM from the dump
        │   ├── etc/
        │   │   └── pve/
        │   │       └── nodes/
        │   │           └── node-name/
        │   │               └── qemu-server/
        │   │                   └── 101.conf        <-- VM config file
        │   └── storage-name/
        │       └── dump/
        │           ├── vzdump-qemu-101...vma.zst   <-- VM dump file
        │           └── vzdump-qemu-101...log       <-- VM dump log
        └── logs/                           <-- Historical logs
            └── 20260415_1453-backup.log    <-- Current backup task log
```

---

### ⚖️ License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
### ⚠️ Disclaimer

**Use at your own risk! This script performs `sudo rm` operations on remote hosts. Always test in a staging environment (e.g., on a non-critical VM) before adding it to a production crontab.**