Initial release: PSQL DB dump NFS backup automation

2026-04-10 10:01:53 +03:00
commit 76e8c2aee2
5 changed files with 367 additions and 0 deletions
.gitignore (+29)
@@ -0,0 +1,29 @@
# Build and Release Folders
bin-debug/
bin-release/
[Oo]bj/
[Bb]in/
# Other files and folders
.settings/
logs/
.logs/
sps_logs/*
.sps_logs/*
# Executables
*.conf
*.swf
*.air
*.ipa
*.apk
*.log
*.tmp
*.log*
*.html*
*tmp_*
*variables*
# Project files, i.e. `.project`, `.actionScriptProperties` and `.flexProperties`
# should NOT be excluded as they contain compiler settings and other important
# information for Eclipse / Flash Builder.
LICENSE (+21)
@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2026 Andrii Syrovatko
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md (+95)
@@ -0,0 +1,95 @@
# 💾 PostgreSQL Backup Automator (Bash)
### 🚀 The Problem
In ISP environments, database backups are critical. Relying on manual exports is risky, and standard tools often lack built-in monitoring, retention policies, and smart remote storage integration.
### 🛠 The Solution
This script provides a robust automation layer for `pg_basebackup`. It ensures backups are created, transferred to remote **NFS storage**, monitored via **Telegram/Email**, and rotated automatically to save space.
### Key Features:
* **NFS-Aware:** Automatically checks, mounts, and verifies remote storage before starting the dump.
* **Concurrency Control:** Uses `flock` to prevent overlapping runs and race conditions.
* **Health Monitoring:** Real-time Telegram alerts on failure and detailed email reports on success.
* **Retention Management:** Automatically purges old backups based on a configurable `RETENTION_DAYS` policy.
* **Dry-Run Mode:** Safe debugging mode (`DEBUG=1`) to test the logic without touching data.
* **Dependency Check:** Built-in verification for `pg_basebackup`, `curl`, and `mailutils`.
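The `flock` guard behind the concurrency-control feature is small enough to demo standalone. A minimal sketch, assuming a hypothetical lock path `/tmp/demo_backup.lock`:

```bash
#!/usr/bin/env bash
# Minimal sketch of the flock-based concurrency guard (hypothetical lock path).
LOCK_FILE="/tmp/demo_backup.lock"

exec 200>"$LOCK_FILE"       # keep fd 200 open for the life of the process
if ! flock -n 200; then     # non-blocking: fail fast instead of queueing
  echo "already running"
  exit 1
fi
echo "lock acquired"

# While we hold the lock, a second attempt on the same file fails immediately:
SECOND=$(flock -n "$LOCK_FILE" -c 'echo got' || echo busy)
echo "second attempt: $SECOND"
```

Because the lock lives on the open file descriptor, it is released automatically when the process exits, even on a crash.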
### 📦 Dependencies & Requirements
| Component | Ubuntu/Debian | RHEL/CentOS/Rocky |
| :--- | :--- | :--- |
| **NFS Server** | `nfs-kernel-server` | `nfs-utils` |
| **NFS Client** | `nfs-common` | `nfs-utils` + `rpcbind` |
| **PostgreSQL** | `postgresql-client` | `postgresql` |
| **Reports** | `curl`, `mailutils` | `curl`, `mailx` |
### 📖 Usage (locally)
1. Clone the repository and navigate to the directory.
2. Create your configuration from the template:
```bash
cp db_backuper.conf.example db_backuper.conf
```
3. Edit `db_backuper.conf` with your DB credentials, NFS paths, and API tokens.
4. Add to your crontab (e.g., daily at 02:00):
```bash
# crontab -e (edit crontab)
0 2 * * * /path/to/db_backuper.sh
```
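Before scheduling, the retention query can be sanity-checked offline. This sketch recreates the `find -mtime` logic the script uses, with a throwaway temp directory and `touch -d` to fake an old dump:

```bash
#!/usr/bin/env bash
# Offline check of the retention logic; all paths here are throwaway.
RETENTION_DAYS=5
MNT_POINT="$(mktemp -d)"

mkdir "$MNT_POINT/psql_db_old" "$MNT_POINT/psql_db_new"
touch -d "10 days ago" "$MNT_POINT/psql_db_old"   # pretend this dump is 10 days old

# The same query db_backuper.sh runs before deleting anything:
mapfile -t OLD < <(find "$MNT_POINT" -mindepth 1 -maxdepth 1 -name "psql_db*" -mtime +"$RETENTION_DAYS")
echo "would delete ${#OLD[@]} dir(s): ${OLD[*]}"
```

Only the directory older than `RETENTION_DAYS` is matched; today's dump is left alone.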
### 📁 Remote Storage Setup (NFS Guide)
To use the remote backup feature, follow these steps to configure your NFS environment.
1. On the Backup Server (Storage)
Add the client's IP to your exports file:
```bash
# Edit /etc/exports (Path to the backup folder and the DB server IP):
/backups/your_db_path 192.168.X.X(rw,sync,no_root_squash,no_subtree_check)
# Where 192.168.X.X is the IP of your DB/Billing server.
# Note: no space between the IP and the options; a space exports the share to everyone.
```
Restart services to apply changes:
```bash
sudo systemctl restart rpcbind nfs-kernel-server
```
**Firewall Configuration** (`iptables`):
Ensure ports `111` (rpcbind) and `2049` (nfsd, TCP/UDP) are open for the client IP; NFSv4 needs only `2049`, while NFSv3 also requires `mountd` to be reachable:
```bash
iptables -A INPUT -s 192.168.X.X/32 -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s 192.168.X.X/32 -p udp --dport 111 -j ACCEPT
iptables -A INPUT -s 192.168.X.X/32 -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s 192.168.X.X/32 -p udp --dport 2049 -j ACCEPT
```
**On the remote (backup) server, install the NFS server package if it is not installed yet:**
**Ubuntu/Debian:**
```bash
sudo apt update && sudo apt install nfs-kernel-server -y
```
**RHEL/CentOS:**
```bash
sudo yum install nfs-utils -y
```
2. On the Client Side (Database Server)
The script handles mounting automatically, but if you want to persist the mount or test it manually:
```bash
# Manual mount
sudo mount -t nfs 192.168.X.Y:/backups/your_db_path /var/db_backups_via_nfs
# Permanent mount via /etc/fstab
192.168.X.Y:/backups/your_db_path /var/db_backups_via_nfs nfs defaults,timeo=900,retrans=5,_netdev 0 0
# Where 192.168.X.Y is the IP of the NFS server configured in step 1.
```
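The script's auto-mount guard can be exercised without root: `mountpoint -q` exits non-zero for a plain directory, and that is the condition which triggers the mount attempt. A minimal sketch with hypothetical paths (`SERVER:/export` is a placeholder):

```bash
#!/usr/bin/env bash
# Sketch of the mount guard; the mount point and SERVER:/export are hypothetical.
MNT_POINT="$(mktemp -d)"

STATE="mounted"
mountpoint -q "$MNT_POINT" || STATE="not mounted"
echo "$MNT_POINT is $STATE"

if [ "$STATE" = "not mounted" ]; then
  # db_backuper.sh would mount here (requires root and a reachable NFS server):
  echo "would run: mount -t nfs SERVER:/export $MNT_POINT -o soft,timeo=30,retrans=2"
fi
```

The `soft,timeo=30,retrans=2` options mirror the script's defaults, so a dead NFS server fails the run instead of hanging it.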
**On the client side (database server), install the NFS client package if it is not installed yet:**
**Ubuntu/Debian:**
```bash
sudo apt update && sudo apt install nfs-common -y
```
**RHEL/CentOS:**
```bash
sudo yum install nfs-utils -y
sudo systemctl enable --now rpcbind
```
### ⚖️ License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
### ⚠️ Disclaimer:
**Use at your own risk! The author is not responsible for any data loss or infrastructure downtime.**
db_backuper.conf.example (+25)
@@ -0,0 +1,25 @@
# Configuration for db_backuper.sh
S_HOSTNAME="your_server_hostname" # Used in report subjects (any label, not necessarily a hostname).
LOCK_FILE="/var/run/db_backuper.lock" # Lock file location (one script process at a time).
CACHE_FILE="/tmp/db_backuper_cache" # Temporary log file (its contents are sent as the report).
PSQL_USER="postgres" # PostgreSQL user allowed to run the dump (pg_basebackup needs the REPLICATION privilege).
PSQL_PORT="5432" # Default PostgreSQL port.
PSQL_COMMAND="pg_basebackup" # Backup binary; the script falls back to a PATH lookup if this is unset.
PSQL_CHECKPOINT="fast" # fast - immediate start (high I/O load) | spread - gradual start (safe for production).
NFS_SERVER_IP="192.168.x.x" # Paste your NFS server IP here.
NFS_SERVER_DIR="/backups/your_path" # Remote server dir path (mounted by the script).
RETENTION_DAYS=5 # Number of days dumps are kept.
DEBUG=1 # 0 - WORKING MODE | 1 - DRY-RUN MODE (no folders created, no dumps made, no old folders removed).
# Configuration of paths
IS_LOCAL_BACKUP=false # true - save dumps locally | false - push dumps to remote NFS storage.
MNT_POINT="/mnt/db_backups" # Target folder for dumps (local path or NFS mount point).
# DB dump process output view
EXTENDED_BACK_STATUS=true # true - pass -P to pg_basebackup (progress reporting) | false - no progress output.
# Secrets (for mail and telegram reports)
TG_BOT_ID="XXXXXXXXXX:XXXXXXX...XXXXXXXXXXXXXXXXX" # Your bot token.
TG_CHAT_ID="-XXXXXXXXXXXXX" # ID of the chat the bot reports to.
MAIL_SENDER="-aFrom:root@yourdomain.com" # Optional sender address (passed to mail as -aFrom:...).
MAIL_RECEIVER="support@yourdomain.com" # Mail reports receiver.
db_backuper.sh (+197)
@@ -0,0 +1,197 @@
#!/usr/bin/env bash
# =============================================================================
# Script Name : db_backuper.sh
# Description : Back up a PostgreSQL database; run by cron or manually.
# Usage : ./db_backuper.sh
# Author : syr4ok (Andrii Syrovatko)
# Version : 2.1.3r
# =============================================================================
# Make a pipeline fail if any command in it fails
set -o pipefail
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# --- LOADING CONFIGURATION ---
DATE_NOW=$(date +%y%m%d)
CONFIG_FILE="$(dirname "$0")/db_backuper.conf"
if [[ -f "$CONFIG_FILE" ]]; then
# shellcheck source=/dev/null
source "$CONFIG_FILE"
else
echo "Error: Configuration file not found. Create db_backuper.conf from example."
exit 1
fi
# --- DEPENDENCY CHECK ---
# Check critical tool (if not present - exit)
PSQL_COMMAND="${PSQL_COMMAND:-pg_basebackup}" # Fall back to a PATH lookup if the config does not set it.
if ! command -v "${PSQL_COMMAND}" &> /dev/null; then
echo "❌ Critical Error: pg_basebackup not found! Backup impossible."
exit 1
fi
# Checking optional tools (Curl (for Telegram) / Mail)
HAS_CURL=true
HAS_MAIL=true
command -v curl &> /dev/null || HAS_CURL=false
command -v mail &> /dev/null || HAS_MAIL=false
# --- Main functions ---
log_divider() {
local LABEL=$1
echo "----------------------- $LABEL $(date +%Y/%m/%d-%H:%M) $LABEL -----------------------"
}
send_tg() {
if [ "$HAS_CURL" = true ]; then
/usr/bin/curl -s -X POST "https://api.telegram.org/bot${TG_BOT_ID}/sendMessage" -d "chat_id=${TG_CHAT_ID}" --data-urlencode "text=[${S_HOSTNAME}]: $1"
else
echo "⚠️ Warning: Telegram report skipped (curl not installed)." | tee -a "${CACHE_FILE}"
fi
}
send_report() {
local STATUS=$1
local EMOJI="💾"
[ "$STATUS" == "ERROR" ] && EMOJI="❌"
log_divider "END" >> "${CACHE_FILE}"
if [ "$HAS_MAIL" = true ]; then
local SUBJECT
SUBJECT="[${S_HOSTNAME}] ${EMOJI} ${STATUS} DB Backup Info - $(date +%Y/%m/%d-%H:%M)"
mail -s "$SUBJECT" "$MAIL_SENDER" "$MAIL_RECEIVER" < "${CACHE_FILE}"
else
echo "⚠️ Warning: Email report skipped (mailutils not installed)." | tee -a "${CACHE_FILE}"
fi
}
# --- BLOCKING THE SCRIPT (only one working process per time)---
exec 200>"$LOCK_FILE"
if ! flock -n 200; then
msg="❌ The script is already running in another process. Exit!"
echo "$msg" | tee -a "${CACHE_FILE}"
send_tg "$msg"
send_report "ERROR"
exit 1
fi
# --- START ---
# 1. Preparing the log file
[ ! -f "${CACHE_FILE}" ] && touch "${CACHE_FILE}" && chmod 660 "${CACHE_FILE}"
log_divider "START" > "${CACHE_FILE}"
# 2. Checking and mounting NFS (only if IS_LOCAL_BACKUP=false)
DB_DIR="$MNT_POINT/psql_db_$DATE_NOW"
TMP_MSG="locally"
if [ "$IS_LOCAL_BACKUP" = false ]; then
TMP_MSG="on NFS"
if ! command -v mount.nfs &> /dev/null; then
msg="❌ Critical Error: nfs-common (mount.nfs) is not installed! Remote backup impossible."
echo "$msg" | tee -a "${CACHE_FILE}"
send_tg "$msg"
send_report "ERROR"
exit 1
fi
if ! mountpoint -q "$MNT_POINT"; then
echo "Attempting to mount NFS..." >> "${CACHE_FILE}"
if ! mount -t nfs "${NFS_SERVER_IP}:${NFS_SERVER_DIR}" "$MNT_POINT" -o soft,timeo=30,retrans=2; then
msg="❌ NFS Mount Failed! Server ${NFS_SERVER_IP} unreachable."
echo "$msg" | tee -a "${CACHE_FILE}"
send_tg "$msg"
send_report "ERROR"
exit 1
fi
fi
fi
# 3. Check for duplicate directory
if [ -d "${DB_DIR}" ]; then
msg="⚠️ DB backup stopped. Today's dir ($DB_DIR) already exists!"
echo "$msg" | tee -a "${CACHE_FILE}"
send_tg "$msg"
send_report "ERROR"
exit 1
fi
# 4. Creating a directory
if [ "$DEBUG" -eq 1 ]; then
echo "DEBUG: [DRY RUN] Skipping directory creation: ${DB_DIR}" >> "${CACHE_FILE}"
else
if ! mkdir -p "${DB_DIR}"; then
msg="❌ Failed to create directory ${DB_DIR} ${TMP_MSG}."
echo "$msg" | tee -a "${CACHE_FILE}"
send_tg "$msg"
send_report "ERROR"
exit 1
fi
chown "${PSQL_USER}:${PSQL_USER}" "${DB_DIR}"
fi
# 5. Starting a backup process
BACKUP_SUCCESS=false
{
[ "$DEBUG" -eq 1 ] && echo "--- DEBUG MODE ON (DRY RUN) ---"
echo "Backup DB STARTED at $(date +%Y/%m/%d-%H:%M)"
} >> "${CACHE_FILE}"
if [ "$DEBUG" -eq 1 ]; then
echo "DEBUG: Skipping real pg_basebackup command..." >> "${CACHE_FILE}"
BACKUP_SUCCESS=true # Treat as success so the retention step below still runs in dry-run mode.
else
if [ "$EXTENDED_BACK_STATUS" = true ]; then
EXT_STATUS_MSG='-P'
else
EXT_STATUS_MSG=''
fi
echo -e "Using command for DB dump:\ncd /tmp/ && sudo -u ${PSQL_USER} ${PSQL_COMMAND} -p ${PSQL_PORT} -D ${DB_DIR} --checkpoint=${PSQL_CHECKPOINT} -Ft -z ${EXT_STATUS_MSG} 2>&1" | tee -a "${CACHE_FILE}"
if cd /tmp/ && sudo -u "${PSQL_USER}" "${PSQL_COMMAND}" -p "${PSQL_PORT}" -D "${DB_DIR}" --checkpoint="${PSQL_CHECKPOINT}" -Ft -z ${EXT_STATUS_MSG} 2>&1 | tee -a "${CACHE_FILE}"; then
# Checking whether the file was actually created (additional security measure)
if [ -d "${DB_DIR}" ]; then
BACKUP_SUCCESS=true
sync
DUMP_SIZE=$(du -sh "${DB_DIR}" 2>/dev/null | cut -f1)
echo -e "Files synced and compressed!\nDB dump size: ${DUMP_SIZE}" | tee -a "${CACHE_FILE}"
fi
fi
fi
# 6. Cleaning up old backups
if [ "$BACKUP_SUCCESS" = true ]; then
echo "Cleaning old backups ${TMP_MSG} (Retention: ${RETENTION_DAYS} days)..." | tee -a "${CACHE_FILE}"
mapfile -t OLD_BACKUPS < <(find "${MNT_POINT}" -mindepth 1 -maxdepth 1 -name "psql_db*" -mtime +"${RETENTION_DAYS}" -print)
if [ ${#OLD_BACKUPS[@]} -gt 0 ]; then
echo "Found ${#OLD_BACKUPS[@]} old backup(s) for deletion:" | tee -a "${CACHE_FILE}"
for dir in "${OLD_BACKUPS[@]}"; do
if [ "$DEBUG" -eq 1 ]; then
echo "DEBUG: [DRY RUN] Would delete: $dir" | tee -a "${CACHE_FILE}"
else
echo "Deleting: $dir" | tee -a "${CACHE_FILE}"
rm -rfv "$dir" 2>&1 | tee -a "${CACHE_FILE}"
fi
done
echo "Cleanup finished." | tee -a "${CACHE_FILE}"
else
echo "No old backups found older than ${RETENTION_DAYS} days." | tee -a "${CACHE_FILE}"
fi
sync
send_report "SUCCESS"
else
msg="❌ Backup process failed!"
echo "$msg" | tee -a "${CACHE_FILE}"
send_tg "$msg"
send_report "ERROR"
exit 1
fi
# 7. Complete and exit
exit 0