Compare commits


No commits in common. "master" and "v1.3-beta1" have entirely different histories.

52 changed files with 10608 additions and 10580 deletions


@@ -1,33 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
---
**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Setup osync with the following config file / the following parameters (please provide either, anonymized)
2. Run osync with the following parameters
3. Result

**Expected behavior**
A clear and concise description of what you expected to happen.

**Deviated behavior**
How does the actual result deviate from the expected behavior?

**Logs**
Please send logs of what happens.
Also, you might run osync with the _DEBUG=yes environment variable to get more verbose debug logs.

**Environment (please complete the following information):**
- Full osync version (including build)
- OS: [e.g. iOS]
- Bitness [e.g. x64 or x86]
- Shell (busybox or else)

**Additional context**
Add any other context about the problem here.
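The `_DEBUG=yes` hint above can be sketched as a concrete invocation. This is only a sketch: the config and log paths are assumptions, not part of the template, and `echo` is used so nothing actually runs.

```shell
#!/usr/bin/env bash
# Sketch: assemble the command a reporter could use to capture verbose logs.
# CONF and LOG are hypothetical paths; adjust them to your setup.
CONF="/etc/osync/sync.conf"
LOG="/tmp/osync_debug.log"

# _DEBUG=yes makes osync emit verbose debug output.
CMD="_DEBUG=yes bash osync.sh ${CONF} --verbose"
echo "Run: ${CMD} > ${LOG} 2>&1"
```

Attaching the resulting log file to the report usually speeds up triage considerably.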


@@ -1,25 +0,0 @@
# Codespell configuration is within .codespellrc
---
name: Codespell

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

permissions:
  contents: read

jobs:
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Annotate locations with typos
        uses: codespell-project/codespell-problem-matcher@v1
      - name: Codespell
        uses: codespell-project/actions-codespell@v2


@@ -1,25 +0,0 @@
name: linux-tests

on: [push, pull_request]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest]
    steps:
      - uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          sudo apt-get install inotify-tools acl
      - name: Execute tests and generate coverage report
        run: |
          export RUNNING_ON_GITHUB_ACTIONS=true
          export SSH_PORT=22
          echo "Running on github actions: ${RUNNING_ON_GITHUB_ACTIONS}"
          echo "Running on ssh port ${SSH_PORT}"
          sudo -E bash ./dev/tests/run_tests.sh
      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v1


@@ -1,28 +0,0 @@
name: macosx-tests

on: [push, pull_request]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [macos-latest]
    steps:
      - uses: actions/checkout@v2
      - name: Install Bash 4
        run: |
          /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
          brew update
          brew install bash
          brew install fswatch
          echo "/usr/local/bin" >> $GITHUB_PATH
      - name: Execute tests and generate coverage report
        run: |
          export RUNNING_ON_GITHUB_ACTIONS=true
          export SSH_PORT=22
          sudo -E bash ./dev/tests/run_tests.sh
      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v1


@@ -1,29 +0,0 @@
name: windows-tests

on: [push, pull_request]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [windows-latest]
    steps:
      - uses: actions/checkout@v2
      - uses: Vampire/setup-wsl@v1
        with:
          additional-packages:
            dos2unix
            rsync
            openssh-server
      - name: Execute tests and generate coverage report
        shell: wsl-bash {0}
        run: |
          export RUNNING_ON_GITHUB_ACTIONS=true
          export SSH_PORT=22
          find ./ -type f ! -path "./.git/*" -print0 | xargs -0 -n 1 -P 4 dos2unix
          service ssh start
          ./dev/tests/run_tests.sh
      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v1

CHANGELOG.md Executable file → Normal file

@@ -1,63 +1,15 @@
## RECENT CHANGES

### Current master
- Make --log-conflicts non experimental (randomly fails)
- ! new option FORCE_CONFLICT_PREVALANCE which will always use Initiator or Target, regardless of best time
- ! target-helper: destination mails etc on target, also, no cmd after on configs

### 16 June 2023: osync v1.3 release (for full changelog since v1.2 branch see all v1.3-beta/RC entries)
- ! New option --sync=bidir|initator2target|target2initiator #147
- ! new option FORCE_CONFLICT_PREVALANCE which will always use Initiator or Target, regardless of best time
- ! Vercomp function is now BusyBox compatible
- Fix for new RSYNC protocol
- New options ALWAYS_SEND_MAILS to allow sending logs regardless of execution states

### 29 June 2020: osync v1.3-RC1 release
- New option to use SSH_CONTROLMASTER in order to speed up remote sync tasks and preserve a single ssh channel
- New option SSH_OPTIONAL_ARGS
- Fixed a problem with macos mv not preserving ownership of files from /tmp
- Fixed very long outstanding issue with special characters in remote target handling
- Fixed an issue where STOP_ON_ERROR_CMD did not work anymore
- Fixed a remote file lock problem (thanks to https://github.com/zhangzhishan)
- Fixed various cosmetic issues with code and logs
- Improved upgrade script
- Fixed a possible bash buffer overflow when synchronizing large filesets (tested with 2M files)
  - This fix actually truncates every string sent to Logger to no more than 16KB
- Fixed osync leaving temporary log files behind in RUN_DIR (/tmp by default)
- Updated target helper service configuration file
- Improved codacy results
- Added more debugging
- Fixed service logs being junked by spinner
- Fixed MINIMUM_SPACE=0 didn't stop the disk space check anymore (Thanks to Val)
- Fixed conflict file logs to be less verbose when no conflicts happen

### 22 May 2019: osync v1.3-beta3 release
- Config file update script fixes
- Removed old Win10 1607 bash fixes to make Win10 1809 work (breaks Win10 1607 beta bash version... Yeah, nothing I can do about that)

### 20 May 2019: osync v1.3-beta2 release
- More --summary statistics
- Config file syntax now uses booleans instead of yes / no (but still accepts old syntax)
  - Added boolean update in upgrade script
- Config file revision check
  - Added config file revision in upgrade script
- New option --sync-type=initator2target|target2initiator that allows using osync as rsync wrapper for unidirectional sync
- New osync target helper service
- Fixed multiple race conditions in parallel executions (which also fixes random conflict logs failures)
- Fixed directory softdeletion bug
- Fixed multiple failed deletions will be retried as many times as failures happened
- Fixed remote running on FreeBSD for some commands, thanks to Vladimirek
- Fixed (again) deletion propagation when file contains spaces (thanks to http://github.com/weinhold)
- Deprecated --log-conflicts for 1.3 branch (is now experimental)
- Updated ofunctions
  - Has better random number generator
  - IsInteger, IsNumeric and IsNumericExpand are now busybox compatible
- Multiple installer fixes
- Multiple batch fixes

### 08 Aug 2018: osync v1.3-beta1 release
- Added an option to log conflictual files
- Presence of conflictual files can trigger a special mail
@@ -88,20 +40,20 @@
- Upgraded shunit2 test framework to v2.1.8pre (git commit 07bb329)
- Multiple smaller fixes and improvements

### 25 Mar 2017: osync v1.2 release (for full changelog of v1.2 branch see all v1.2-beta/RC entries)
- Check for initiator directory before launching monitor mode
- Updated RPM spec file (Thanks to https://github.com/liger1978)
- Fixed remote commands can be run on local runs and obviously fail
- Minor fixes in installer logic

### 10 Feb 2017: osync v1.2-RC3 release
- Uninstaller skips ssh_filter if needed by other program (osync/obackup)
- Logger now automatically obfuscates _REMOTE_TOKEN
- Logger doesn't show failed commands in stdout, only logs them

### 08 Feb 2017: osync v1.2-RC2 release
- Tests have run on CentOS 5, 6 and 7, Debian 8, Linux Mint 18, Fedora 25, FreeBSD 10.3/pfSense, FreeBSD 11, MacOSX Sierra, Win10 1607 (14393.479) bash, Cygwin x64 and MSYS2 current
- Hugely improved ssh_filter
@@ -112,7 +64,7 @@
- Fixed installer statistics don't report OS
- Minor tweaks and fixes in ofunctions

### 13 Dec 2016: osync v1.2-RC1 release
- Unit tests have run on CentOS 5, 6 and 7, Debian 8, Linux Mint 18, FreeBSD 10.3/pfSense, FreeBSD 11, MacOSX Sierra, Win10 1607 (14393.479) bash, Cygwin x64 and MSYS2 current
- Added optional rsync arguments configuration value
@@ -161,7 +113,7 @@
- More code compliance
- Lots of minor fixes

### 19 Nov 2016: osync v1.2-beta3 re-release
- Fixed blocker bug where local tests tried GetRemoteOS anyway
- Fixed CentOS 5 compatibility bug for checking disk space introduced in beta3
@@ -169,7 +121,7 @@
- Made unit tests clean authorized_keys file after usage
- Added local unit test where remote OS connection would fail

### 18 Nov 2016: osync v1.2-beta3 released
- Improved locking / unlocking replicas
- Fixed killing local pid that has lock bug introduced in v1.2 rewrite
@@ -199,16 +151,14 @@
- Simplified logger
- All fixes from v1.1.5

### 17 Oct 2016: osync v1.2-beta2 released
- osync now propagates symlink deletions and moves symlinks without referents to deletion dir
- Upgrade script now has the ability to add any missing value
- Improved unit tests
  - Added upgrade script test
  - Added deletion propagation tests

### 30 Aug 2016: osync v1.2-beta released
- Rendered more recent code compatible with bash 3.2+
- Added a PKGBUILD file for ArchLinux thanks to Shadowigor (https://github.com/shadowigor). Builds available at https://aur.archlinux.org/packages/osync/
- Some more code compliance & more paranoia checks
@@ -235,8 +185,7 @@
- Added KillAllChilds function to accept multiple pids
- Improved logging

### 17 Nov 2016: osync v1.1.5 released
- Backported unit tests from v1.2-beta, allowing to fix the following
- Allow quicksync mode to specify rsync include / exclude patterns as environment variables
- Added default path separator char in quicksync mode for multiple includes / exclusions
@@ -245,30 +194,25 @@
- Fixed error alerts cannot be triggered from subprocesses
- Fixed remote locked targets are unlocked in any case

### 10 Nov 2016: osync v1.1.4 released
- Fixed a corner case with sending alerts with logfile attachments when osync is used by multiple users

### 02 Sep 2016: osync v1.1.3 released
- Fixed installer for CYGWIN / MSYS environment

### 28 Aug 2016: osync v1.1.2 released
- Renamed sync.conf to sync.conf.example (thanks to https://github.com/hortimech)
- Fixed RunAfterHook may be executed twice
- Fixed soft deletion when SUDO_EXEC is enabled

### 06 Aug 2016: osync v1.1.1 released
- Fixed bogus rsync pattern file adding
- Fixed soft deletion always enabled on target
- Fixed problem with attributes file list function
- Fixed deletion propagation code
- Fixed missing deletion / backup directories message in verbose mode

### 27 Jul 2016: osync v1.1 released
- More msys and cygwin compatibility
- Logging begins now before any remote checks
- Improved process killing and process time control
@@ -304,10 +248,10 @@
- Uploaded coding style manifest
- Added LSB info to init script for Debian based distros

## v0-v1.0x - Jun 2013 - Sep 2015

### 22 Jul. 2015: Osync v1.00a released
- Small improvements in osync-batch.sh time management
- Improved various logging on error
- Work in progress: Unit tests (initial tests written by onovy, Thanks again!)
@@ -323,8 +267,7 @@
- Removed legacy lockfile code from init script
- Removed hardcoded program name from init script

### 01 Apr. 2015: Osync v1.00pre
- Improved and refactored the soft deletion routine by merging conflict backup and soft deletion
- Reworked soft deletion code to handle a case where a top level directory gets deleted even if the files contained in it are not old enough (this obviously shouldn't happen on most FS)
- Added more logging
@@ -358,8 +301,7 @@
- Added a routine that reinjects failed deletions for next run in order to prevent bringing back files when deletion failed with permission issues
- Added treat dir symlink as dir parameter

### 27 May 2014: Osync 0.99 RC3
- Additional delete fix for *BSD and MSYS (deleted file list not created right)
- Fixed dry mode to use non dry after run treelists to create delete lists
- Added follow symlink parameter
@@ -406,8 +348,7 @@
- Added possibility to quick sync two local directories without any prior configuration
- Added time control on OS detection

### 02 Nov. 2013: Osync 0.99 RC2
- Minor improvement on operating system detection
- Improved RunLocalCommand execution hook
- Minor improvements on permission checks
@@ -432,8 +373,7 @@
- Fixed various typos
- Enforced CheckConnectivityRemoteHost and CheckConnectivity3rdPartyHosts checks (if one of these fails, osync is stopped)

### 18 Aug. 2013: Osync 0.99 RC1
- Added possibility to change default logfile
- Fixed a possible error upon master replica lock check
- Fixed exclude directories with spaces in names generate errors on master replica tree functions
@@ -444,8 +384,7 @@
- Fixed LoadConfigFile function will not warn on wrong config file
- Added --no-maxtime parameter for sync big changes without enforcing execution time checks

### 03 Aug. 2013: beta 3 milestone
- Softdelete functions do now honor --dry switch
- Simplified sync delete functions
- Enhanced compatibility with different charsets in filenames
@@ -453,14 +392,12 @@
- Tree functions now honor supplementary rsync arguments
- Tree functions now honor exclusion lists

### 01 Aug. 2013: beta 2 milestone
- Fixed an issue with spaces in directory trees
- Fixed an issue with recursive directory trees
- Revamped a bit code to add bash 3.2 compatibility

### 24 Jul. 2013: beta milestone
- Fixed some bad error handling in CheckMasterSlaveDirs and LockDirectories
- Added support for spaces in sync dirs and exclude lists
- Fixed false exit code if no remote slave lock present
@@ -490,4 +427,5 @@
- Added soft-deleted items
- Added backup items in case of conflict

### 19 Jun. 2013: Project begin as Obackup fork


@@ -1,6 +0,0 @@
## KNOWN ISSUES
- Cannot finish sync if one replica contains a directory and the other replica contains a file named the same way (Unix doesn't allow this)
- Daemon mode monitors changes in the whole replica directories, without honoring exclusion lists
- Soft deletion does not honor exclusion lists (i.e. soft deleted files will be cleaned regardless of any exclude pattern because they are in the deleted folder)
- Colors don't work in mac shell

KNOWN_ISSUES.md Normal file

@@ -0,0 +1,6 @@
KNOWN ISSUES
------------
- Cannot finish sync if one replica contains a directory and the other replica contains a file named the same way (Unix doesn't allow this)
- Soft deletion does not honor exclusion lists (i.e. soft deleted files will be cleaned regardless of any exclude pattern because they are in the deleted folder)
- Colors don't work in mac shell


@@ -1,4 +1,4 @@
Copyright (c) 2013-2023, Orsiris de Jong. ozy@netpower.fr
All rights reserved.
Redistribution and use in source and binary forms, with or without


@@ -1,4 +0,0 @@
When submitting a pull request, please modify the files in the dev directory rather than those generated on-the-fly.
You may find all code contained in osync.sh in n_osync.sh and ofunctions.sh.
You may run your modified code by using `merge.sh osync` in order to generate ../osync.sh

README.md

@@ -1,26 +1,18 @@
# osync
[![License](https://img.shields.io/badge/License-BSD%203--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
[![GitHub Release](https://img.shields.io/github/release/deajan/osync.svg?label=Latest)](https://github.com/deajan/osync/releases/latest)
[![Percentage of issues still open](http://isitmaintained.com/badge/open/deajan/osync.svg)](http://isitmaintained.com/project/deajan/osync "Percentage of issues still open")
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/651acb2fd64642eb91078ba523b7f887)](https://www.codacy.com/app/ozy/osync?utm_source=github.com&utm_medium=referral&utm_content=deajan/osync&utm_campaign=Badge_Grade)
[![linux tests](https://github.com/deajan/osync/actions/workflows/linux.yml/badge.svg)](https://github.com/deajan/osync/actions/workflows/linux.yml)
[![windows tests](https://github.com/deajan/osync/actions/workflows/windows.yml/badge.svg)](https://github.com/deajan/osync/actions/workflows/windows.yml)
[![macos tests](https://github.com/deajan/osync/actions/workflows/macos.yml/badge.svg)](https://github.com/deajan/osync/actions/workflows/macos.yml)
A two way filesync script running on bash Linux, BSD, Android, MacOSX, Cygwin, MSYS2, Win10 bash and virtually any system supporting bash.
File synchronization is bidirectional, and can be run manually, as a scheduled task, or triggered on file changes in monitor mode.
It is a command line rsync wrapper with a lot of additional features baked in.
This is a quickstart guide; you can find the full documentation on the [author's site](http://www.netpower.fr/osync).

## About

osync provides the following capabilities:

- Local-Local and Local-Remote sync
- Fault tolerance with resume scenarios
- POSIX ACL and extended attributes synchronization
- Full script time control
- Soft deletions and multiple backups handling
- Before / after run command execution
@@ -38,34 +30,23 @@ osync uses pidlocks to prevent multiple concurrent sync processes on/to the same
You may launch concurrent sync processes on the same system as long as the replicas to synchronize are different.
Multiple osync tasks may be launched sequentially by the osync-batch tool.
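The sequential batching described above can be sketched as a simple loop; osync-batch adds retry and time control on top of this idea. The config file names below are made up for illustration, and `echo` stands in for the real call.

```shell
#!/usr/bin/env bash
# Sketch: launch several sync tasks one after another, which is what the
# osync-batch tool automates. Config file names are hypothetical examples.
confs=("backup-home.conf" "backup-www.conf")

for conf in "${confs[@]}"; do
    # Stand-in for: bash osync.sh "/etc/osync/${conf}"
    echo "Would sync using /etc/osync/${conf}"
done
```

Because each task holds its own pidlock per replica pair, running them one after another avoids lock contention entirely.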
## Tested platforms

| Operating system | Version                |
|------------------|------------------------|
| AlmaLinux        | 9                      |
| Android\*        | Not known              |
| CentOS           | 5.x, 6.x, 7.x          |
| Fedora           | 22-25                  |
| FreeBSD          | 8.3-11                 |
| Debian           | 6-11                   |
| Linux Mint       | 14-18                  |
| macOS            | Not known              |
| pfSense          | 2.3.x                  |
| QTS (x86)        | 4.5.1                  |
| Ubuntu           | 12.04-22.04            |
| Windows\*\*      | 10                     |

\* via Termux.

\*\* via MSYS, Cygwin and WSL.

Some users also have successfully used osync on Gentoo and created an OpenRC init script for it.
## Installation
osync has been designed to not delete any data, but rather make backups of conflictual files or soft deletes.
Nevertheless, you should always have a neat backup of your data before trying a new sync tool.

Getting osync via GitHub (remove the -b "stable" if you want the latest dev snapshot):

    $ git clone -b "stable" https://github.com/deajan/osync
    $ cd osync
@@ -73,10 +54,10 @@ Getting osync via GitHub (remove the -b "stable" if you want latest dev snapshot
Installer script accepts some parameters for automation. Launch install.sh --help for options.

There is also an RPM file that should fit RHEL/CentOS/Fedora and basically any RPM based distro, see the GitHub release.
Please note that RPM files will install osync to `/usr/bin` instead of `/usr/local/bin` in order to enforce good practices.

osync will install itself to `/usr/local/bin` and an example configuration file will be installed to `/etc/osync`.

osync needs to run with bash shell. Using any other shell will most probably result in errors.
If bash is not your default shell, you may invoke it using
@@ -87,90 +68,78 @@ On *BSD and BusyBox, be sure to have bash installed.
If you can't install osync, you may just copy osync.sh wherever you need and run it from there.

Arch Linux packages are available at <https://aur.archlinux.org/packages/osync/> (thanks to Shadowigor, <https://github.com/shadowigor>).

## Upgrade from previous configuration files

Since osync v1.1 the config file format has changed in semantics and adds new config options.
Also, master is now called initiator and slave is now called target.
osync v1.3 also added multiple new configuration options.

You can upgrade all v1.0x-v1.3-dev config files by running the upgrade script:

    $ ./upgrade-v1.0x-v1.3x.sh /etc/osync/your-config-file.conf

The script will backup your config file, update its content and try to connect to initiator and target replicas to update the state dir.
## Usage

Osync can work in 3 modes:

1. [:rocket: Quick sync mode](#quick-sync-mode)
2. [:gear: Configuration file mode](#configuration-file-mode)
3. [:mag_right: Monitor mode](#monitor-mode)

> [!NOTE]
> Please use double quotes as path delimiters. Do not use escaped characters in path names.

### <a id="quick-sync-mode"></a>:rocket: Quick sync mode

Quick sync mode is convenient to do fast syncs between some directories. However, the [configuration file mode](#configuration-file-mode) gives much more functionality.
    # osync.sh --initiator="/path/to/dir1" --target="/path/to/remote dir2"
    # osync.sh --initiator="/path/to/another dir" --target="ssh://user@host.com:22//path/to/dir2" --rsakey=/home/user/.ssh/id_rsa_private_key_example.com
#### Quick sync with minimal options

In order to run osync the quickest (without transferring file attributes, without softdeletion, without prior space checks and without remote connectivity checks), you may use the following:

    # MINIMUM_SPACE=0 PRESERVE_ACL=no PRESERVE_XATTR=no SOFT_DELETE_DAYS=0 CONFLICT_BACKUP_DAYS=0 REMOTE_HOST_PING=no osync.sh --initiator="/path/to/another dir" --target="ssh://user@host.com:22//path/to/dir2" --rsakey=/home/user/.ssh/id_rsa_private_key_example.com

All the settings described here may also be configured in the conf file.
### Summary mode

osync will output only file changes and errors with the following:

	# osync.sh --initiator="/path/to/dir1" --target="/path/to/dir" --summary --errors-only --no-prefix

This also works in configuration file mode.
### <a id="configuration-file-mode"></a>:gear: Configuration file mode
You'll have to customize the `sync.conf` file according to your needs.
If you intend to sync a remote directory, osync will need a pair of private/public RSA keys to perform remote SSH connections. Also, running sync as superuser requires configuring the `/etc/sudoers` file.
> [!TIP]
> Read the [example configuration file](https://github.com/deajan/osync/blob/master/sync.conf.example) for documentation about remote sync setups.
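For orientation, the heart of a `sync.conf` is just a handful of variables. A minimal fragment might look like the following; this is an illustrative sketch whose variable names follow `sync.conf.example`, with placeholder values you must adapt:

```sh
## Illustrative sync.conf fragment -- variable names follow sync.conf.example, values are placeholders
INSTANCE_ID="my_sync"
INITIATOR_SYNC_DIR="/path/to/dir1"
TARGET_SYNC_DIR="ssh://user@host.com:22//path/to/dir2"
SSH_RSA_PRIVATE_KEY="/home/user/.ssh/id_rsa"
```

The full example file documents many more options (ACL/xattr preservation, soft deletion, conflict backups, and so on), all of which keep their defaults if omitted.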
Once you've customized a `sync.conf` file, you may run osync with the following test run:

	# osync.sh /path/to/your.conf --dry

If everything went well, you may run the actual configuration with:

	# osync.sh /path/to/your.conf

To display which files and attrs are actually synchronized and which files are to be soft deleted / are in conflict, use `--verbose` (you may mix it with `--silent` to send verbose output only to the log files):

	# osync.sh /path/to/your.conf --verbose

Use `--no-maxtime` to disable execution time checks, which is useful for big initial sync tasks that might take a long time. Subsequent runs should then only propagate changes and take much less time:

	# osync.sh /path/to/your.conf --no-maxtime

Once you're confident about your first runs, you may add osync as a cron task like the following in `/etc/crontab` which would run osync every 30 minutes:

	*/30 * * * * root /usr/local/bin/osync.sh /etc/osync/my_sync.conf --silent

Please note that this syntax works for RedHat/CentOS. On Debian you might want to remove the username (i.e. root) in order to make the crontab entry work.
### Batch mode

You may want to sequentially run multiple sync sets between the same servers. In that case, `osync-batch.sh` is a nice tool that will run every osync conf file, and, if a task fails,
run it again if there's still some time left.
To run all `.conf` files found in `/etc/osync`, and retry 3 times every configuration that fails if the whole sequential run took less than 2 hours, use:

	# osync-batch.sh --path=/etc/osync --max-retries=3 --max-exec-time=7200

Having multiple conf files can then be run in a single cron command like

	00 00 * * * root /usr/local/bin/osync-batch.sh --path=/etc/osync --silent

### <a id="monitor-mode"></a>:mag_right: Monitor mode

> [!NOTE]
> Monitoring changes requires the inotifywait command (`inotify-tools` package for most Linux distributions). BSD, macOS and Windows are not yet supported for this operation mode, unless you find an `inotify-tools` package on these OSes.

Monitor mode will perform a sync upon file operations on initiator replica. This can be a drawback on functionality versus scheduled mode because this mode only launches a sync task if there are file modifications on the initiator replica, without being able to monitor the target replica. Target replica changes are only synced when initiator replica changes occur, or when a given amount of time (600 seconds by default) has passed without any changes on the initiator replica.

This mode can also be launched as a daemon with an init script. Please read the documentation for more info.
To use this mode, use `--on-changes`:

	# osync.sh /etc/osync/my_sync.conf --on-changes

To run this mode as a system service with the `osync-srv` script, you can run the `install.sh` script (which should work in most cases) or copy the files by hand:

- `osync.sh` to `/usr/local/bin`
- `sync.conf` to `/etc/osync`
- For InitV, `osync-srv` to `/etc/init.d`
- For systemd, `osync-srv@.service` to `/usr/lib/systemd/system`
- For OpenRC, `osync-srv-openrc` to `/etc/init.d/osync-srv-openrc`
For InitV (any configuration file found in `/etc/osync` will create an osync daemon instance when service is launched on initV):

	$ service osync-srv start
	$ chkconfig osync-srv on

For systemd, launch service (one service per config file to launch) with:

	$ systemctl start osync-srv@configfile.conf
	$ systemctl enable osync-srv@configfile.conf

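The `@` syntax works because `osync-srv@.service` is a systemd template unit: whatever follows `@` is exposed as `%i` inside the unit. As a rough, hypothetical sketch of what such a template contains (paraphrased for illustration only; refer to the shipped `osync-srv@.service` for the real file):

```ini
# Hypothetical sketch of a systemd template unit -- not the shipped file
[Unit]
Description=osync monitor daemon for %i
After=network-online.target

[Service]
# %i expands to the instance name given after "@", i.e. a config file in /etc/osync
ExecStart=/usr/local/bin/osync.sh /etc/osync/%i --on-changes --silent

[Install]
WantedBy=multi-user.target
```

This is why one service instance is launched per configuration file: each `systemctl start osync-srv@myconf.conf` spawns an independent daemon bound to that conf.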
For OpenRC (user contrib), launch service (one service per config file to launch) with:

	$ rc-update add osync-srv.configfile default

## Security enhancements

Remote SSH connection security can be improved by limiting what hostnames may connect, disabling some SSH options and using ssh filter.
Please read the full documentation in order to configure the ssh filter.
## Contributions

All kinds of contributions are welcome.
When submitting a PR, please be sure to modify files in the dev directory (`dev/n_osync.sh`, `dev/ofunctions.sh`, `dev/common_install.sh`, etc.) as most of the main files are generated via merge.sh.
When testing your contribs, generate files via merge.sh or use bootstrap.sh, which generates a temporary version of n_osync.sh with all includes.
Unit tests are run by Travis on every PR, but you may also run them manually via `dev/tests/run_tests.sh`, which adds some tests that Travis can't do.
The SSH port can be changed on the fly via the SSH_PORT environment variable, e.g.:

	# SSH_PORT=2222 dev/tests/run_tests.sh

Consider reading CODING_CONVENTIONS.TXT before submitting a patch.
## Troubleshooting

You may find osync's logs in `/var/log/osync.[INSTANCE_ID].log` (or the current directory if `/var/log` is not writable).
Additionally, you can use the `--verbose` flag to see what actions are going on.
When opening an issue, please post the corresponding log files. Also, you may run osync with the _DEBUG option in order to have more precise logs, e.g.:

	# _DEBUG=yes ./osync.sh /path/to/conf

## Uninstalling

The installer script also has an uninstall mode that will keep configuration files. Use it with:

	$ ./install.sh --remove

## Author

Feel free to open an issue on GitHub or mail me for support in my spare time :)
Orsiris de Jong | ozy@netpower.fr
@ -1,4 +1,4 @@
Coding style used for my bash projects (v3.2 Oct 2018)

As bash is clearly an error prone script language, we'll use as much standard coding as possible, including some quick and dirty debug techniques described here.

++++++ Header
@ -162,15 +162,6 @@ if [ $retval -ne 0 ]; then
	Logger "Some error message" "ERROR" $retval
fi

++++++ includes

Using merge.sh, the program may have includes like
	include #### RemoteLogger SUBSET ####
All possible includes are listed in ofunctions.sh
Mostly, includes are needed to port functions to a remote shell without writing them again.

++++++ Remote execution

Remote commands should always invoke bash (using '"'"' to escape single quotes of 'bash -c "command"'). It is preferable to use ssh heredoc in order to use plain code.
If local and remote code is identical, wrap remote code in a function so only minor modifications are needed.
Remote code return code is transmitted via exit.
@ -193,9 +184,6 @@ if [ $retval -ne 0 ]; then
	Logger "Some error message" "ERROR" $retval
fi

We also need to transmit a couple of environment variables (RUN_DIR; PROGRAM; _LOGGER_VERBOSE... see current setups) in order to make standard code.
Include works here too.

++++++ File variables

All eval cmd should exit their content to a file called "$RUNDIR/$PROGRAM.${FUNCNAME[0]}.$SCRIPT_PID"
@ -209,6 +197,15 @@ Quoting happens outside the function call.
echo "$(myStringFunction $myStringVar)"

++++++ Finding code errors

Use shellcheck.net now and then (ignore SC2086 in our case)
Use a low tech approach to find uneven number of quotes per line
	tr -cd "'\n" < my_bash_file.sh | awk 'length%2==1 {print NR, $0}'
	tr -cd "\"\n" < my_bash_file.sh | awk 'length%2==1 {print NR, $0}'

++++++ ofunctions

As obackup and osync share a lot of common functions, ofunctions.sh will host all shared code.
@ -261,16 +258,3 @@ When launching the program with 'bash -x', add SLEEP_TIME=1 so wait functions wo
Ex:
SLEEP_TIME=1 bash -x ./program.sh

++++++ Finding code errors

Before every release, shellcheck must be run
Also a grep -Eri "TODO|WIP" osync/* must be run in order to find potential release blockers
Use shellcheck.net now and then (ignore SC2086 in our case)
Use a low tech approach to find uneven number of quotes per line
	tr -cd "'\n" < my_bash_file.sh | awk 'length%2==1 {print NR, $0}'
	tr -cd "\"\n" < my_bash_file.sh | awk 'length%2==1 {print NR, $0}'
@ -1,17 +0,0 @@
## Releases require the following
- Documentation must be up to date
- grep -Eri "TODO|WIP" osync/* must be run in order to find potential release blockers, including in unit tests and config files
- Run program and then use declare -p to find any leaked variables that should not exist outside of the program
- packaging files must be updated (RHEL / Arch)
- Before every release, shellcheck must be run
- ./shellcheck.sh -e SC2034 -e SC2068 ofunctions.sh
- ./shellcheck.sh n_osync.sh
- ./shellcheck.sh ../install.sh
- ./shellcheck.sh ../osync-batch.sh
- ./shellcheck.sh ../ssh_filter.sh
- Unexpansion of main and subprograms must be done
- Arch repo must be updated
- Source must be put to download on www.netpower.fr/osync
- Tests must be run against all supported operating systems via run_tests.sh
@ -1,6 +1,6 @@
#!/usr/bin/env bash

## dev pre-processor bootstrap rev 2019052001
## Yeah !!! A really tech sounding name... In fact it's just include emulation in bash

function Usage {
@ -8,7 +8,7 @@ function Usage {
	echo "Creates and executes $0.tmp.sh"
	echo "Usage:"
	echo ""
	echo "$0 --program=osync|obackup|pmocr [options to pass to program]"
	echo "Can also be run with BASHVERBOSE=yes environment variable in order to prefix program with bash -x"
}
@ -19,16 +19,16 @@ if [ ! -f "./merge.sh" ]; then
fi

bootstrapProgram=""
opts=()
outputFileName="$0"

for i in "${@}"; do
	case "$i" in
		--program=*)
		bootstrapProgram="${i##*=}"
		;;
		*)
		opts+=("$i")
		;;
	esac
done
@ -44,7 +44,7 @@ else
	__PREPROCESSOR_Constants

	if [ ! -f "$__PREPROCESSOR_PROGRAM_EXEC" ]; then
		echo "Cannot find file $__PREPROCESSOR_PROGRAM executable [n_$bootstrapProgram.sh]."
		exit 1
	fi
fi
@ -69,7 +69,7 @@ if type termux-fix-shebang > /dev/null 2>&1; then
fi

if [ "$BASHVERBOSE" == "yes" ]; then
	bash -x "$outputFileName.tmp.sh" "${opts[@]}"
else
	"$outputFileName.tmp.sh" "${opts[@]}"
fi
@ -1,9 +1,9 @@
#!/usr/bin/env bash

SUBPROGRAM=[prgname]
PROGRAM="$SUBPROGRAM-batch" # Batch program to run osync / obackup instances sequentially and rerun failed ones
AUTHOR="(L) 2013-2020 by Orsiris de Jong"
CONTACT="http://www.netpower.fr - ozy@netpower.fr"
PROGRAM_BUILD=2020031502

## Runs an osync /obackup instance for every conf file found
## If an instance fails, run it again if time permits
@ -26,19 +26,36 @@ else
	LOG_FILE=./$SUBPROGRAM-batch.log
fi

## Default directory where to store temporary run files
if [ -w /tmp ]; then
	RUN_DIR=/tmp
elif [ -w /var/tmp ]; then
	RUN_DIR=/var/tmp
else
	RUN_DIR=.
fi

# No need to edit under this line ##############################################################

include #### Logger SUBSET ####
include #### CleanUp SUBSET ####
include #### GenericTrapQuit SUBSET ####

function CheckEnvironment {
	## osync / obackup executable full path can be set here if it cannot be found on the system
@ -128,8 +145,6 @@ function Usage {
	exit 128
}

trap GenericTrapQuit TERM EXIT HUP QUIT

opts=""
for i in "$@"
do
dev/common_install.sh Normal file → Executable file
@ -2,6 +2,8 @@
## Installer script suitable for osync / obackup / pmocr

include #### _OFUNCTIONS_BOOTSTRAP SUBSET ####

PROGRAM=[prgname]
PROGRAM_VERSION=$(grep "PROGRAM_VERSION=" $PROGRAM.sh)
@ -10,15 +12,12 @@ PROGRAM_BINARY=$PROGRAM".sh"
PROGRAM_BATCH=$PROGRAM"-batch.sh"
SSH_FILTER="ssh_filter.sh"

SCRIPT_BUILD=2025012001
INSTANCE_ID="installer-$SCRIPT_BUILD"

## osync / obackup / pmocr / zsnap install script
## Tested on RHEL / CentOS 6 & 7, Fedora 23, Debian 7 & 8, Mint 17 and FreeBSD 8, 10 and 11
## Please adapt this to fit your distro needs

include #### OFUNCTIONS MICRO SUBSET ####

# Get current install.sh path from http://stackoverflow.com/a/246128/2635443
SCRIPT_PATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
@ -27,6 +26,56 @@ _STATS=1
ACTION="install"
FAKEROOT=""
function GetCommandlineArguments {
for i in "$@"; do
case $i in
--prefix=*)
FAKEROOT="${i##*=}"
;;
--silent)
_LOGGER_SILENT=true
;;
--no-stats)
_STATS=0
;;
--remove)
ACTION="uninstall"
;;
--help|-h|-?)
Usage
;;
*)
Logger "Unknown option '$i'" "SIMPLE"
Usage
exit
;;
esac
done
}
GetCommandlineArguments "$@"
CONF_DIR=$FAKEROOT/etc/$PROGRAM
BIN_DIR="$FAKEROOT/usr/local/bin"
SERVICE_DIR_INIT=$FAKEROOT/etc/init.d
# Should be /usr/lib/systemd/system, but /lib/systemd/system exists on debian & rhel / fedora
SERVICE_DIR_SYSTEMD_SYSTEM=$FAKEROOT/lib/systemd/system
SERVICE_DIR_SYSTEMD_USER=$FAKEROOT/etc/systemd/user
SERVICE_DIR_OPENRC=$FAKEROOT/etc/init.d
if [ "$PROGRAM" == "osync" ]; then
SERVICE_NAME="osync-srv"
elif [ "$PROGRAM" == "pmocr" ]; then
SERVICE_NAME="pmocr-srv"
fi
SERVICE_FILE_INIT="$SERVICE_NAME"
SERVICE_FILE_SYSTEMD_SYSTEM="$SERVICE_NAME@.service"
SERVICE_FILE_SYSTEMD_USER="$SERVICE_NAME@.service.user"
SERVICE_FILE_OPENRC="$SERVICE_NAME-openrc"
## Generic code
## Default log file
if [ -w "$FAKEROOT/var/log" ]; then
	LOG_FILE="$FAKEROOT/var/log/$PROGRAM-install.log"
@ -36,15 +85,13 @@ else
	LOG_FILE="./$PROGRAM-install.log"
fi

include #### Logger SUBSET ####
include #### UrlEncode SUBSET ####
include #### GetLocalOS SUBSET ####
include #### GetConfFileValue SUBSET ####
include #### CleanUp SUBSET ####
include #### GenericTrapQuit SUBSET ####
function SetLocalOSSettings {
	USER=root
	DO_INIT=true

	# LOCAL_OS and LOCAL_OS_FULL are global variables set at GetLocalOS
@ -54,12 +101,10 @@ function SetLocalOSSettings {
		;;
		*"MacOSX"*)
		GROUP=admin
		DO_INIT=false
		;;
		*"Cygwin"*|*"Android"*|*"msys"*|*"BusyBox"*)
		USER=""
		GROUP=""
		DO_INIT=false
		;;
		*)
		GROUP=root
@ -67,12 +112,12 @@ function SetLocalOSSettings {
	esac

	if [ "$LOCAL_OS" == "Android" ] || [ "$LOCAL_OS" == "BusyBox" ]; then
		Logger "Cannot be installed on [$LOCAL_OS]. Please use $PROGRAM.sh directly." "CRITICAL"
		exit 1
	fi

	if ([ "$USER" != "" ] && [ "$(whoami)" != "$USER" ] && [ "$FAKEROOT" == "" ]); then
		Logger "Must be run as $USER." "CRITICAL"
		exit 1
	fi
@ -80,68 +125,35 @@
}
function GetInit {
	init="none"
	if [ -f /sbin/openrc-run ]; then
		init="openrc"
		Logger "Detected openrc." "NOTICE"
	elif [ -f /usr/lib/systemd/systemd ]; then
		init="systemd"
		Logger "Detected systemd." "NOTICE"
	elif [ -f /sbin/init ]; then
		if type -p file > /dev/null 2>&1; then
			if file /sbin/init | grep systemd > /dev/null; then
				init="systemd"
				Logger "Detected systemd." "NOTICE"
			else
				init="initV"
			fi
		else
			init="initV"
		fi
		if [ $init == "initV" ]; then
			Logger "Detected initV." "NOTICE"
		fi
	else
		Logger "Can't detect initV, systemd or openRC. Service files won't be installed. You can still run $PROGRAM manually or via cron." "WARN"
		init="none"
	fi
}
function CreateDir {
	local dir="${1}"
	local dirMask="${2}"
	local dirUser="${3}"
	local dirGroup="${4}"

	if [ ! -d "$dir" ]; then
		(
		if [ $(IsInteger $dirMask) -eq 1 ]; then
			umask $dirMask
		fi
		mkdir -p "$dir"
		)
		if [ $? == 0 ]; then
			Logger "Created directory [$dir]." "NOTICE"
		else
			Logger "Cannot create directory [$dir]." "CRITICAL"
			exit 1
		fi
	fi

	if [ "$dirUser" != "" ]; then
		userGroup="$dirUser"
		if [ "$dirGroup" != "" ]; then
			userGroup="$userGroup"":$dirGroup"
		fi
		chown "$userGroup" "$dir"
		if [ $? != 0 ]; then
			Logger "Could not set directory ownership on [$dir] to [$userGroup]." "CRITICAL"
			exit 1
		else
			Logger "Set file ownership on [$dir] to [$userGroup]." "NOTICE"
		fi
	fi
}
function CopyFile {
@ -155,33 +167,32 @@ function CopyFile {
	local overwrite="${8:-false}"

	local userGroup=""
	local oldFileName

	if [ "$destFileName" == "" ]; then
		destFileName="$sourceFileName"
	fi

	if [ -f "$destPath/$destFileName" ] && [ $overwrite == false ]; then
		destFileName="$sourceFileName.new"
		Logger "Copying [$sourceFileName] to [$destPath/$destFileName]." "NOTICE"
	fi

	cp "$sourcePath/$sourceFileName" "$destPath/$destFileName"
	if [ $? != 0 ]; then
		Logger "Cannot copy [$sourcePath/$sourceFileName] to [$destPath/$destFileName]. Make sure to run install script in the directory containing all other files." "CRITICAL"
		Logger "Also make sure you have permissions to write to [$BIN_DIR]." "ERROR"
		exit 1
	else
		Logger "Copied [$sourcePath/$sourceFileName] to [$destPath/$destFileName]." "NOTICE"
		if [ "$(IsInteger $fileMod)" -eq 1 ]; then
			chmod "$fileMod" "$destPath/$destFileName"
			if [ $? != 0 ]; then
				Logger "Cannot set file permissions of [$destPath/$destFileName] to [$fileMod]." "CRITICAL"
				exit 1
			else
				Logger "Set file permissions to [$fileMod] on [$destPath/$destFileName]." "NOTICE"
			fi
		elif [ "$fileMod" != "" ]; then
			Logger "Bogus filemod [$fileMod] for [$destPath] given." "WARN"
		fi

		if [ "$fileUser" != "" ]; then
@ -193,10 +204,10 @@ function CopyFile {
			chown "$userGroup" "$destPath/$destFileName"
			if [ $? != 0 ]; then
				Logger "Could not set file ownership on [$destPath/$destFileName] to [$userGroup]." "CRITICAL"
				exit 1
			else
				Logger "Set file ownership on [$destPath/$destFileName] to [$userGroup]." "NOTICE"
			fi
		fi
	fi
@ -248,60 +259,44 @@ function CopyServiceFiles {
			CreateDir "$SERVICE_DIR_SYSTEMD_USER"
			CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_SYSTEMD_USER" "$SERVICE_FILE_SYSTEMD_USER" "$SERVICE_FILE_SYSTEMD_USER" "" "" "" true
		fi

		if [ -f "$SCRIPT_PATH/$TARGET_HELPER_SERVICE_FILE_SYSTEMD_SYSTEM" ]; then
			CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_SYSTEMD_SYSTEM" "$TARGET_HELPER_SERVICE_FILE_SYSTEMD_SYSTEM" "$TARGET_HELPER_SERVICE_FILE_SYSTEMD_SYSTEM" "" "" "" true
			Logger "Created optional service [$TARGET_HELPER_SERVICE_NAME] with same specifications as below." "NOTICE"
		fi
		if [ -f "$SCRIPT_PATH/$TARGET_HELPER_SERVICE_FILE_SYSTEMD_USER" ]; then
			CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_SYSTEMD_USER" "$TARGET_HELPER_SERVICE_FILE_SYSTEMD_USER" "$TARGET_HELPER_SERVICE_FILE_SYSTEMD_USER" "" "" "" true
		fi

		Logger "Created [$SERVICE_NAME] service in [$SERVICE_DIR_SYSTEMD_SYSTEM] and [$SERVICE_DIR_SYSTEMD_USER]." "NOTICE"
		Logger "Can be activated with [systemctl start SERVICE_NAME@instance.conf] where instance.conf is the name of the config file in $CONF_DIR." "NOTICE"
		Logger "Can be enabled on boot with [systemctl enable $SERVICE_NAME@instance.conf]." "NOTICE"
		Logger "In userland, active with [systemctl --user start $SERVICE_NAME@instance.conf]." "NOTICE"
	elif ([ "$init" == "initV" ] && [ -f "$SCRIPT_PATH/$SERVICE_FILE_INIT" ] && [ -d "$SERVICE_DIR_INIT" ]); then
		#CreateDir "$SERVICE_DIR_INIT"
		CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_INIT" "$SERVICE_FILE_INIT" "$SERVICE_FILE_INIT" "755" "" "" true
		if [ -f "$SCRIPT_PATH/$TARGET_HELPER_SERVICE_FILE_INIT" ]; then
			CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_INIT" "$TARGET_HELPER_SERVICE_FILE_INIT" "$TARGET_HELPER_SERVICE_FILE_INIT" "755" "" "" true
			Logger "Created optional service [$TARGET_HELPER_SERVICE_NAME] with same specifications as below." "NOTICE"
		fi
		Logger "Created [$SERVICE_NAME] service in [$SERVICE_DIR_INIT]." "NOTICE"
		Logger "Can be activated with [service $SERVICE_FILE_INIT start]." "NOTICE"
		Logger "Can be enabled on boot with [chkconfig $SERVICE_FILE_INIT on]." "NOTICE"
	elif ([ "$init" == "openrc" ] && [ -f "$SCRIPT_PATH/$SERVICE_FILE_OPENRC" ] && [ -d "$SERVICE_DIR_OPENRC" ]); then
		# Rename service to usual service file
		CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_OPENRC" "$SERVICE_FILE_OPENRC" "$SERVICE_FILE_INIT" "755" "" "" true
		if [ -f "$SCRIPT_PATH/$TARGET_HELPER_SERVICE_FILE_OPENRC" ]; then
			CopyFile "$SCRIPT_PATH" "$SERVICE_DIR_OPENRC" "$TARGET_HELPER_SERVICE_FILE_OPENRC" "$TARGET_HELPER_SERVICE_FILE_OPENRC" "755" "" "" true
			Logger "Created optional service [$TARGET_HELPER_SERVICE_NAME] with same specifications as below." "NOTICE"
fi
Logger "Created [$SERVICE_NAME] service in [$SERVICE_DIR_OPENRC]." "NOTICE"
Logger "Can be activated with [rc-update add $SERVICE_NAME.instance] where instance is a configuration file found in /etc/osync." "NOTICE"
else else
Logger "Cannot properly find how to deal with init on this system. Skipping service file installation." "NOTICE" Logger "Cannot properly find how to deal with init on this system. Skipping service file installation." "SIMPLE"
fi fi
} }
function Statistics {
if type wget > /dev/null 2>&1; then
wget -qO- "$STATS_LINK" > /dev/null 2>&1
if [ $? == 0 ]; then
return 0
fi
fi
if type curl > /dev/null 2>&1; then
curl "$STATS_LINK" -o /dev/null > /dev/null 2>&1
if [ $? == 0 ]; then
return 0
fi
fi
Logger "Neither wget nor curl could be used. Cannot run statistics. Please use the provided link." "WARN"
return 1
}
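The Statistics function above uses a probe-then-fallback pattern: try wget, fall back to curl, and warn when neither is usable. A minimal standalone sketch of that pattern (the function name `fetch_url` is ours, not part of the installer):

```shell
#!/usr/bin/env bash
# Sketch of the wget/curl fallback used by Statistics (fetch_url is a made-up name).
fetch_url() {
	local url="$1"
	# Prefer wget if present; `type` probes the command without running it
	if type wget > /dev/null 2>&1; then
		wget -qO- "$url" > /dev/null 2>&1 && return 0
	fi
	# Fall back to curl
	if type curl > /dev/null 2>&1; then
		curl -s "$url" -o /dev/null && return 0
	fi
	# Neither downloader available, or both fetches failed
	return 1
}
```

Usage: `fetch_url "http://example.org" || echo "no downloader available or fetch failed"`.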
@@ -311,12 +306,12 @@ function RemoveFile {
if [ -f "$file" ]; then
rm -f "$file"
if [ $? != 0 ]; then
Logger "Could not remove file [$file]." "ERROR"
else
Logger "Removed file [$file]." "NOTICE"
fi
else
Logger "File [$file] not found. Skipping." "NOTICE"
fi
}
@@ -330,25 +325,13 @@ function RemoveAll {
if [ ! -f "$BIN_DIR/osync.sh" ] && [ ! -f "$BIN_DIR/obackup.sh" ]; then # Check if any other program requiring ssh filter is present before removal
RemoveFile "$BIN_DIR/$SSH_FILTER"
else
Logger "Skipping removal of [$BIN_DIR/$SSH_FILTER] because other programs that need it are present." "NOTICE"
fi
# Try to uninstall every possible service file
if [ $init == "systemd" ]; then
RemoveFile "$SERVICE_DIR_SYSTEMD_SYSTEM/$SERVICE_FILE_SYSTEMD_SYSTEM"
RemoveFile "$SERVICE_DIR_SYSTEMD_USER/$SERVICE_FILE_SYSTEMD_USER"
RemoveFile "$SERVICE_DIR_SYSTEMD_SYSTEM/$TARGET_HELPER_SERVICE_FILE_SYSTEMD_SYSTEM"
RemoveFile "$SERVICE_DIR_SYSTEMD_USER/$TARGET_HELPER_SERVICE_FILE_SYSTEMD_USER"
elif [ $init == "initV" ]; then
RemoveFile "$SERVICE_DIR_INIT/$SERVICE_FILE_INIT"
RemoveFile "$SERVICE_DIR_INIT/$TARGET_HELPER_SERVICE_FILE_INIT"
elif [ $init == "openrc" ]; then
RemoveFile "$SERVICE_DIR_OPENRC/$SERVICE_FILE_OPENRC"
RemoveFile "$SERVICE_DIR_OPENRC/$TARGET_HELPER_SERVICE_FILE_OPENRC"
else
Logger "Can uninstall only from initV, systemd or openRC." "WARN"
fi
Logger "Skipping configuration files in [$CONF_DIR]. You may remove this directory manually." "NOTICE"
}
function Usage {
@@ -361,88 +344,15 @@ function Usage {
exit 127
}
############################## Script entry point
function GetCommandlineArguments {
for i in "$@"; do
case $i in
--prefix=*)
FAKEROOT="${i##*=}"
;;
--silent)
_LOGGER_SILENT=true
;;
--no-stats)
_STATS=0
;;
--remove)
ACTION="uninstall"
;;
--help|-h|-?)
Usage
;;
*)
Logger "Unknown option '$i'" "ERROR"
Usage
exit
;;
esac
done
}
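GetCommandlineArguments splits `--prefix=/some/path` style options with bash parameter expansion: `${i##*=}` keeps only what follows the last `=`. A quick illustration (the value is hypothetical):

```shell
arg="--prefix=/tmp/fakeroot"   # hypothetical option string
value="${arg##*=}"             # strip everything up to the last '='
echo "$value"                  # /tmp/fakeroot
```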
GetCommandlineArguments "$@"
CONF_DIR=$FAKEROOT/etc/$PROGRAM
BIN_DIR="$FAKEROOT/usr/local/bin"
SERVICE_DIR_INIT=$FAKEROOT/etc/init.d
# Should be /usr/lib/systemd/system, but /lib/systemd/system exists on debian & rhel / fedora
SERVICE_DIR_SYSTEMD_SYSTEM=$FAKEROOT/lib/systemd/system
SERVICE_DIR_SYSTEMD_USER=$FAKEROOT/etc/systemd/user
SERVICE_DIR_OPENRC=$FAKEROOT/etc/init.d
if [ "$PROGRAM" == "osync" ]; then
SERVICE_NAME="osync-srv"
TARGET_HELPER_SERVICE_NAME="osync-target-helper-srv"
TARGET_HELPER_SERVICE_FILE_INIT="$TARGET_HELPER_SERVICE_NAME"
TARGET_HELPER_SERVICE_FILE_SYSTEMD_SYSTEM="$TARGET_HELPER_SERVICE_NAME@.service"
TARGET_HELPER_SERVICE_FILE_SYSTEMD_USER="$TARGET_HELPER_SERVICE_NAME@.service.user"
TARGET_HELPER_SERVICE_FILE_OPENRC="$TARGET_HELPER_SERVICE_NAME-openrc"
elif [ "$PROGRAM" == "pmocr" ]; then
SERVICE_NAME="pmocr-srv"
fi
SERVICE_FILE_INIT="$SERVICE_NAME"
SERVICE_FILE_SYSTEMD_SYSTEM="$SERVICE_NAME@.service"
SERVICE_FILE_SYSTEMD_USER="$SERVICE_NAME@.service.user"
SERVICE_FILE_OPENRC="$SERVICE_NAME-openrc"
## Generic code
trap GenericTrapQuit TERM EXIT HUP QUIT
if [ ! -w "$(dirname $LOG_FILE)" ]; then
echo "Cannot write to log [$(dirname $LOG_FILE)]."
else
Logger "Script begin, logging to [$LOG_FILE]." "DEBUG"
fi
# Set default umask
umask 0022
GetLocalOS
SetLocalOSSettings
# On Mac OS this always produces a warning which causes the installer to fail with exit code 2
# Since we know it won't work anyway, and that's fine, just skip this step
if $DO_INIT; then
GetInit GetInit
fi
STATS_LINK="http://instcount.netpower.fr?program=$PROGRAM&version=$PROGRAM_VERSION&os=$OS&action=$ACTION"
if [ "$ACTION" == "uninstall" ]; then
RemoveAll
Logger "$PROGRAM uninstalled." "NOTICE"
else
CreateDir "$CONF_DIR"
CreateDir "$BIN_DIR"
@@ -451,10 +361,10 @@ else
if [ "$PROGRAM" == "osync" ] || [ "$PROGRAM" == "pmocr" ]; then
CopyServiceFiles
fi
Logger "$PROGRAM installed. Use with $BIN_DIR/$PROGRAM_BINARY" "NOTICE"
if [ "$PROGRAM" == "osync" ] || [ "$PROGRAM" == "obackup" ]; then
echo ""
Logger "If connecting remotely, consider setting up the ssh filter to enhance security." "NOTICE"
echo ""
fi
fi
@@ -463,7 +373,7 @@ if [ $_STATS -eq 1 ]; then
if [ $_LOGGER_SILENT == true ]; then
Statistics
else
Logger "In order to make usage statistics, the script would like to connect to $STATS_LINK" "NOTICE"
read -r -p "No data except those in the url will be sent. Allow [Y/n] " response
case $response in
[nN])

(File diff suppressed because it is too large)

2860 dev/debug_osync_target_helper.sh (new executable file)

(File diff suppressed because it is too large)

dev/merge.sh

@@ -1,13 +1,10 @@
#!/usr/bin/env bash
## MERGE 2020031501
## Merges ofunctions.sh and n_program.sh into program.sh
## Adds installer
PROGRAM=merge
INSTANCE_ID=dev
function Usage {
echo "Merges ofunctions.sh and n_program.sh into debug_program.sh and ../program.sh"
echo "Usage"
@@ -15,24 +12,30 @@ function Usage {
}
function __PREPROCESSOR_Merge {
local nPROGRAM="$1"
if [ ! -f "n_$nPROGRAM.sh" ]; then
Logger "n_$nPROGRAM.sh is not found in local path." "CRITICAL"
exit 1
fi
VERSION=$(grep "PROGRAM_VERSION=" n_$nPROGRAM.sh)
VERSION=${VERSION#*=}
__PREPROCESSOR_Constants
__PREPROCESSOR_Unexpand "n_$nPROGRAM.sh" "debug_$nPROGRAM.sh"
for subset in "${__PREPROCESSOR_SUBSETS[@]}"; do
__PREPROCESSOR_MergeSubset "$subset" "${subset//SUBSET/SUBSET END}" "ofunctions.sh" "debug_$nPROGRAM.sh"
done
__PREPROCESSOR_CleanDebug "debug_$nPROGRAM.sh" "../$nPROGRAM.sh"
}
function __PREPROCESSOR_Constants {
@@ -43,10 +46,7 @@ function __PREPROCESSOR_Constants {
__PREPROCESSOR_SUBSETS=(
'#### OFUNCTIONS FULL SUBSET ####'
'#### OFUNCTIONS MINI SUBSET ####'
'#### OFUNCTIONS MICRO SUBSET ####'
'#### PoorMansRandomGenerator SUBSET ####'
'#### _OFUNCTIONS_BOOTSTRAP SUBSET ####'
'#### RUN_DIR SUBSET ####'
'#### DEBUG SUBSET ####'
'#### TrapError SUBSET ####'
'#### RemoteLogger SUBSET ####'
@@ -60,9 +60,6 @@ function __PREPROCESSOR_Constants {
'#### GetConfFileValue SUBSET ####'
'#### SetConfFileValue SUBSET ####'
'#### CheckRFC822 SUBSET ####'
'#### CleanUp SUBSET ####'
'#### GenericTrapQuit SUBSET ####'
'#### FileMove SUBSET ####'
)
}
@@ -72,7 +69,7 @@ function __PREPROCESSOR_Unexpand {
unexpand "$source" > "$destination"
if [ $? != 0 ]; then
Logger "Cannot unexpand [$source] to [$destination]." "CRITICAL"
exit 1
fi
}
@@ -85,75 +82,64 @@ function __PREPROCESSOR_MergeSubset {
sed -n "/$subsetBegin/,/$subsetEnd/p" "$subsetFile" > "$subsetFile.$subsetBegin"
if [ $? != 0 ]; then
Logger "Cannot sed subset [$subsetBegin -- $subsetEnd] in [$subsetFile]." "CRITICAL"
exit 1
fi
sed "/include $subsetBegin/r $subsetFile.$subsetBegin" "$mergedFile" | grep -v -E "$subsetBegin\$|$subsetEnd\$" > "$mergedFile.tmp"
if [ $? != 0 ]; then
Logger "Cannot add subset [$subsetBegin] to [$mergedFile]." "CRITICAL"
exit 1
fi
rm -f "$subsetFile.$subsetBegin"
if [ $? != 0 ]; then
Logger "Cannot remove temporary subset [$subsetFile.$subsetBegin]." "CRITICAL"
exit 1
fi
rm -f "$mergedFile"
if [ $? != 0 ]; then
Logger "Cannot remove merged original file [$mergedFile]." "CRITICAL"
exit 1
fi
mv "$mergedFile.tmp" "$mergedFile"
if [ $? != 0 ]; then
Logger "Cannot move merged tmp file to original [$mergedFile]." "CRITICAL"
exit 1
fi
}
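__PREPROCESSOR_MergeSubset works by extracting a marker-delimited region with `sed -n '/begin/,/end/p'` and then splicing it into the merged file at the matching `include` line with `sed /r`. A self-contained sketch of the extraction step, with made-up markers that mimic the ofunctions.sh subset markers:

```shell
# Made-up subset file with begin/end markers (mimics an ofunctions.sh subset)
cat > /tmp/subsets.sh << 'EOF'
#### FOO SUBSET ####
foo() { echo "foo"; }
#### FOO SUBSET END ####
EOF
# Print the region between the two markers, inclusive of the marker lines
sed -n '/#### FOO SUBSET ####/,/#### FOO SUBSET END ####/p' /tmp/subsets.sh
```

The merge step then drops the marker lines themselves with `grep -v -E "$subsetBegin\$|$subsetEnd\$"`, leaving only the function bodies in the assembled script.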
function __PREPROCESSOR_CleanDebug {
local source="${1}"
local destination="${2:-$source}"
sed '/'$PARANOIA_DEBUG_BEGIN'/,/'$PARANOIA_DEBUG_END'/d' "$source" | grep -v "$PARANOIA_DEBUG_LINE" > "$destination.tmp"
if [ $? != 0 ]; then
Logger "Cannot remove PARANOIA_DEBUG code from standard build." "CRITICAL"
exit 1
else
mv -f "$destination.tmp" "$destination"
if [ $? -ne 0 ]; then
Logger "Cannot move [$destination.tmp] to [$destination]." "CRITICAL"
exit 1
fi
fi
chmod +x "$source"
if [ $? != 0 ]; then
Logger "Cannot chmod [$source]." "CRITICAL"
exit 1
else
Logger "Prepared [$source]." "NOTICE"
fi
if [ "$source" != "$destination" ]; then
chmod +x "$destination"
if [ $? != 0 ]; then
Logger "Cannot chmod [$destination]." "CRITICAL"
exit 1
else
Logger "Prepared [$destination]." "NOTICE"
fi
fi
}
function __PREPROCESSOR_CopyCommons {
local nPROGRAM="$1"
sed "s/\[prgname\]/$nPROGRAM/g" common_install.sh > ../install.sh
if [ $? != 0 ]; then
Logger "Cannot assemble install." "CRITICAL"
exit 1
fi
@@ -161,34 +147,45 @@ function __PREPROCESSOR_CopyCommons {
__PREPROCESSOR_MergeSubset "$subset" "${subset//SUBSET/SUBSET END}" "ofunctions.sh" "../install.sh"
done
__PREPROCESSOR_CleanDebug "../install.sh"
if [ -f "common_batch.sh" ]; then
sed "s/\[prgname\]/$nPROGRAM/g" common_batch.sh > ../$nPROGRAM-batch.sh
if [ $? != 0 ]; then
Logger "Cannot assemble batch runner." "CRITICAL"
exit 1
fi
for subset in "${__PREPROCESSOR_SUBSETS[@]}"; do
__PREPROCESSOR_MergeSubset "$subset" "${subset//SUBSET/SUBSET END}" "ofunctions.sh" "../$nPROGRAM-batch.sh"
done
__PREPROCESSOR_CleanDebug "../$nPROGRAM-batch.sh"
fi
}
# If sourced don't do anything
if [ "$(basename $0)" == "merge.sh" ]; then
source "./ofunctions.sh"
if [ $? != 0 ]; then
echo "Please run $0 in dev directory with ofunctions.sh"
exit 1
fi
trap GenericTrapQuit TERM EXIT HUP QUIT
if [ "$1" == "osync" ]; then
__PREPROCESSOR_Merge osync
__PREPROCESSOR_Merge osync_target_helper
__PREPROCESSOR_CopyCommons osync
elif [ "$1" == "obackup" ]; then
__PREPROCESSOR_Merge obackup

(File diff suppressed because it is too large)

455 dev/n_osync_target_helper.sh (new executable file)

@@ -0,0 +1,455 @@
#!/usr/bin/env bash
PROGRAM="osync-target-helper" # Rsync based two way sync engine with fault tolerance
AUTHOR="(C) 2013-2017 by Orsiris de Jong"
CONTACT="http://www.netpower.fr/osync - ozy@netpower.fr"
PROGRAM_VERSION=1.2.2-dev
PROGRAM_BUILD=2017061901
IS_STABLE=no
include #### OFUNCTIONS FULL SUBSET ####
# If using "include" statements, make sure the script does not get executed unless it's loaded by bootstrap
include #### _OFUNCTIONS_BOOTSTRAP SUBSET ####
[ "$_OFUNCTIONS_BOOTSTRAP" != true ] && echo "Please use bootstrap.sh to load this dev version of $(basename $0)" && exit 1
_LOGGER_PREFIX="time"
## Working directory. This directory exists in any replica and contains state files, backups, soft deleted files etc
OSYNC_DIR=".osync_workdir"
function TrapQuit {
local exitcode
# Get ERROR / WARN alert flags from subprocesses that call Logger
if [ -f "$RUN_DIR/$PROGRAM.Logger.warn.$SCRIPT_PID.$TSTAMP" ]; then
WARN_ALERT=true
fi
if [ -f "$RUN_DIR/$PROGRAM.Logger.error.$SCRIPT_PID.$TSTAMP" ]; then
ERROR_ALERT=true
fi
if [ $ERROR_ALERT == true ]; then
Logger "$PROGRAM finished with errors." "ERROR"
if [ "$_DEBUG" != "yes" ]
then
SendAlert
else
Logger "Debug mode, no alert mail will be sent." "NOTICE"
fi
exitcode=1
elif [ $WARN_ALERT == true ]; then
Logger "$PROGRAM finished with warnings." "WARN"
if [ "$_DEBUG" != "yes" ]
then
SendAlert
else
Logger "Debug mode, no alert mail will be sent." "NOTICE"
fi
exitcode=2 # Warning exit code must not force daemon mode to quit
else
Logger "$PROGRAM finished." "ALWAYS"
exitcode=0
fi
CleanUp
KillChilds $SCRIPT_PID > /dev/null 2>&1
exit $exitcode
}
function CheckEnvironment {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
if ! type ssh > /dev/null 2>&1 ; then
Logger "ssh not present. Cannot start sync." "CRITICAL"
exit 1
fi
if [ "$SSH_PASSWORD_FILE" != "" ] && ! type sshpass > /dev/null 2>&1 ; then
Logger "sshpass not present. Cannot use password authentication." "CRITICAL"
exit 1
fi
}
# Only gets checked in config file mode where all values should be present
function CheckCurrentConfig {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
# Check all variables that should contain "yes" or "no"
declare -a yes_no_vars=(SUDO_EXEC SSH_COMPRESSION SSH_IGNORE_KNOWN_HOSTS REMOTE_HOST_PING)
for i in "${yes_no_vars[@]}"; do
test="if [ \"\$$i\" != \"yes\" ] && [ \"\$$i\" != \"no\" ]; then Logger \"Bogus $i value [\$$i] defined in config file. Correct your config file or update it using the update script if using an old version.\" \"CRITICAL\"; exit 1; fi"
eval "$test"
done
# Check all variables that should contain a numerical value >= 0
declare -a num_vars=(MIN_WAIT MAX_WAIT)
for i in "${num_vars[@]}"; do
test="if [ $(IsNumericExpand \"\$$i\") -eq 0 ]; then Logger \"Bogus $i value [\$$i] defined in config file. Correct your config file or update it using the update script if using an old version.\" \"CRITICAL\"; exit 1; fi"
eval "$test"
done
}
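CheckCurrentConfig builds each validation test as a string in which `\$$i` stays unexpanded, then `eval`s it so the check runs against the variable whose *name* is stored in `$i`. A reduced sketch of this indirection (the variable name and value are hypothetical):

```shell
SOME_YES_NO_VAR="maybe"   # hypothetical config value
i="SOME_YES_NO_VAR"
# \$$i only becomes $SOME_YES_NO_VAR when the string is eval'ed
check="if [ \"\$$i\" != \"yes\" ] && [ \"\$$i\" != \"no\" ]; then echo \"Bogus $i value\"; fi"
eval "$check"   # prints: Bogus SOME_YES_NO_VAR value
```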
# Gets checked in quicksync and config file mode
function CheckCurrentConfigAll {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
local tmp
if [ "$INSTANCE_ID" == "" ]; then
Logger "No INSTANCE_ID defined in config file." "CRITICAL"
exit 1
fi
if [ "$INITIATOR_SYNC_DIR" == "" ]; then
Logger "No INITIATOR_SYNC_DIR set in config file." "CRITICAL"
exit 1
fi
if [ "$TARGET_SYNC_DIR" == "" ]; then
Logger "No TARGET_SYNC_DIR set in config file." "CRITICAL"
exit 1
fi
if ([ ! -f "$SSH_RSA_PRIVATE_KEY" ] && [ ! -f "$SSH_PASSWORD_FILE" ]); then
Logger "Cannot find rsa private key [$SSH_RSA_PRIVATE_KEY] nor password file [$SSH_PASSWORD_FILE]. No authentication method provided." "CRITICAL"
exit 1
fi
}
function TriggerInitiatorUpdate {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
$SSH_CMD env _REMOTE_TOKEN="$_REMOTE_TOKEN" \
env _DEBUG="'$_DEBUG'" env _PARANOIA_DEBUG="'$_PARANOIA_DEBUG'" env _LOGGER_SILENT="'$_LOGGER_SILENT'" env _LOGGER_VERBOSE="'$_LOGGER_VERBOSE'" env _LOGGER_PREFIX="'$_LOGGER_PREFIX'" env _LOGGER_ERR_ONLY="'$_LOGGER_ERR_ONLY'" \
env PROGRAM="'$PROGRAM'" env SCRIPT_PID="'$SCRIPT_PID'" TSTAMP="'$TSTAMP'" env INSTANCE_ID="'$INSTANCE_ID'" \
env PUSH_FILE="'$(EscapeSpaces "${INITIATOR[$__updateTriggerFile]}")'" \
env LC_ALL=C $COMMAND_SUDO' bash -s' << 'ENDSSH' >> "$RUN_DIR/$PROGRAM.${FUNCNAME[0]}.$SCRIPT_PID.$TSTAMP" 2>&1
include #### DEBUG SUBSET ####
include #### TrapError SUBSET ####
include #### RemoteLogger SUBSET ####
echo "$INSTANCE_ID $(date '+%Y%m%dT%H%M%S.%N')" >> "$PUSH_FILE"
ENDSSH
if [ -s "$RUN_DIR/$PROGRAM.${FUNCNAME[0]}.$SCRIPT_PID.$TSTAMP" ] || [ $? != 0 ]; then
(
_LOGGER_PREFIX="RR"
Logger "$(cat $RUN_DIR/$PROGRAM.${FUNCNAME[0]}.$SCRIPT_PID.$TSTAMP)" "ERROR"
)
return 1
fi
return 0
}
function Init {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
# Set error exit code if a piped command fails
set -o pipefail
set -o errtrace
trap TrapQuit TERM EXIT HUP QUIT
local uri
local hosturiandpath
local hosturi
## Test if the initiator dir is an ssh uri, and if yes, break it down into its values
if [ "${INITIATOR_SYNC_DIR:0:6}" == "ssh://" ]; then
REMOTE_OPERATION="yes"
# remove leading 'ssh://'
uri=${INITIATOR_SYNC_DIR#ssh://*}
if [[ "$uri" == *"@"* ]]; then
# remove everything after '@'
REMOTE_USER=${uri%@*}
else
REMOTE_USER=$LOCAL_USER
fi
if [ "$SSH_RSA_PRIVATE_KEY" == "" ]; then
if [ ! -f "$SSH_PASSWORD_FILE" ]; then
# Assume that there might exist a standard rsa key
SSH_RSA_PRIVATE_KEY=~/.ssh/id_rsa
fi
fi
# remove everything before '@'
hosturiandpath=${uri#*@}
# remove everything after first '/'
hosturi=${hosturiandpath%%/*}
if [[ "$hosturi" == *":"* ]]; then
REMOTE_PORT=${hosturi##*:}
else
REMOTE_PORT=22
fi
REMOTE_HOST=${hosturi%%:*}
# remove everything before first '/'
TARGET_SYNC_DIR=${hosturiandpath#*/}
else
Logger "No valid remote initiator URI found in [$INITIATOR_SYNC_DIR]." "CRITICAL"
exit 1
fi
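The ssh URI breakdown above is pure bash parameter expansion. Applied to a hypothetical `ssh://user@host:port/path` value, the same expansions yield:

```shell
uri_full="ssh://backupuser@example.com:2222/data/sync"   # hypothetical URI
uri=${uri_full#ssh://}       # backupuser@example.com:2222/data/sync
user=${uri%@*}               # backupuser
hostpath=${uri#*@}           # example.com:2222/data/sync
hosturi=${hostpath%%/*}      # example.com:2222
port=${hosturi##*:}          # 2222
host=${hosturi%%:*}          # example.com
rpath="/${hostpath#*/}"      # /data/sync
echo "$user@$host:$port$rpath"   # backupuser@example.com:2222/data/sync
```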
if [ "$INITIATOR_SYNC_DIR" == "" ] || [ "$TARGET_SYNC_DIR" == "" ]; then
Logger "Initiator or target path empty." "CRITICAL"
exit 1
fi
## Make sure there is only one trailing slash on path
INITIATOR_SYNC_DIR="${INITIATOR_SYNC_DIR%/}/"
TARGET_SYNC_DIR="${TARGET_SYNC_DIR%/}/"
# Expand ~ if exists
INITIATOR_SYNC_DIR="${INITIATOR_SYNC_DIR/#\~/$HOME}"
TARGET_SYNC_DIR="${TARGET_SYNC_DIR/#\~/$HOME}"
SSH_RSA_PRIVATE_KEY="${SSH_RSA_PRIVATE_KEY/#\~/$HOME}"
SSH_PASSWORD_FILE="${SSH_PASSWORD_FILE/#\~/$HOME}"
## Replica format
## Why the f*** does bash not have simple objects ?
# Local variables used for state filenames
local lockFilename="lock"
local stateDir="state"
local backupDir="backup"
local deleteDir="deleted"
local partialDir="_partial"
local lastAction="last-action"
local resumeCount="resume-count"
if [ "$_DRYRUN" == true ]; then
local drySuffix="-dry"
else
local drySuffix=
fi
# The following associative like array definitions are used for bash ver < 4 compat
readonly __type=0
readonly __replicaDir=1
readonly __lockFile=2
readonly __stateDir=3
readonly __backupDir=4
readonly __deleteDir=5
readonly __partialDir=6
readonly __initiatorLastActionFile=7
readonly __targetLastActionFile=8
readonly __resumeCount=9
readonly __treeCurrentFile=10
readonly __treeAfterFile=11
readonly __treeAfterFileNoSuffix=12
readonly __deletedListFile=13
readonly __failedDeletedListFile=14
readonly __successDeletedListFile=15
readonly __timestampCurrentFile=16
readonly __timestampAfterFile=17
readonly __timestampAfterFileNoSuffix=18
readonly __conflictListFile=19
readonly __updateTriggerFile=20
INITIATOR=()
INITIATOR[$__type]='initiator'
INITIATOR[$__replicaDir]="$INITIATOR_SYNC_DIR"
INITIATOR[$__lockFile]="$INITIATOR_SYNC_DIR$OSYNC_DIR/$lockFilename"
INITIATOR[$__stateDir]="$OSYNC_DIR/$stateDir"
INITIATOR[$__backupDir]="$OSYNC_DIR/$backupDir"
INITIATOR[$__deleteDir]="$OSYNC_DIR/$deleteDir"
INITIATOR[$__partialDir]="$OSYNC_DIR/$partialDir"
INITIATOR[$__initiatorLastActionFile]="$INITIATOR_SYNC_DIR$OSYNC_DIR/$stateDir/initiator-$lastAction-$INSTANCE_ID$drySuffix"
INITIATOR[$__targetLastActionFile]="$INITIATOR_SYNC_DIR$OSYNC_DIR/$stateDir/target-$lastAction-$INSTANCE_ID$drySuffix"
INITIATOR[$__resumeCount]="$INITIATOR_SYNC_DIR$OSYNC_DIR/$stateDir/$resumeCount-$INSTANCE_ID$drySuffix"
INITIATOR[$__treeCurrentFile]="-tree-current-$INSTANCE_ID$drySuffix"
INITIATOR[$__treeAfterFile]="-tree-after-$INSTANCE_ID$drySuffix"
INITIATOR[$__treeAfterFileNoSuffix]="-tree-after-$INSTANCE_ID"
INITIATOR[$__deletedListFile]="-deleted-list-$INSTANCE_ID$drySuffix"
INITIATOR[$__failedDeletedListFile]="-failed-delete-$INSTANCE_ID$drySuffix"
INITIATOR[$__successDeletedListFile]="-success-delete-$INSTANCE_ID$drySuffix"
INITIATOR[$__timestampCurrentFile]="-timestamps-current-$INSTANCE_ID$drySuffix"
INITIATOR[$__timestampAfterFile]="-timestamps-after-$INSTANCE_ID$drySuffix"
INITIATOR[$__timestampAfterFileNoSuffix]="-timestamps-after-$INSTANCE_ID"
INITIATOR[$__conflictListFile]="conflicts-$INSTANCE_ID$drySuffix"
INITIATOR[$__updateTriggerFile]="$INITIATOR_SYNC_DIR$OSYNC_DIR/.osync-update.push"
TARGET=()
TARGET[$__type]='target'
TARGET[$__replicaDir]="$TARGET_SYNC_DIR"
TARGET[$__lockFile]="$TARGET_SYNC_DIR$OSYNC_DIR/$lockFilename"
TARGET[$__stateDir]="$OSYNC_DIR/$stateDir"
TARGET[$__backupDir]="$OSYNC_DIR/$backupDir"
TARGET[$__deleteDir]="$OSYNC_DIR/$deleteDir"
TARGET[$__partialDir]="$OSYNC_DIR/$partialDir" # unused
TARGET[$__initiatorLastActionFile]="$TARGET_SYNC_DIR$OSYNC_DIR/$stateDir/initiator-$lastAction-$INSTANCE_ID$drySuffix" # unused
TARGET[$__targetLastActionFile]="$TARGET_SYNC_DIR$OSYNC_DIR/$stateDir/target-$lastAction-$INSTANCE_ID$drySuffix" # unused
TARGET[$__resumeCount]="$TARGET_SYNC_DIR$OSYNC_DIR/$stateDir/$resumeCount-$INSTANCE_ID$drySuffix" # unused
TARGET[$__treeCurrentFile]="-tree-current-$INSTANCE_ID$drySuffix" # unused
TARGET[$__treeAfterFile]="-tree-after-$INSTANCE_ID$drySuffix" # unused
TARGET[$__treeAfterFileNoSuffix]="-tree-after-$INSTANCE_ID" # unused
TARGET[$__deletedListFile]="-deleted-list-$INSTANCE_ID$drySuffix" # unused
TARGET[$__failedDeletedListFile]="-failed-delete-$INSTANCE_ID$drySuffix"
TARGET[$__successDeletedListFile]="-success-delete-$INSTANCE_ID$drySuffix"
TARGET[$__timestampCurrentFile]="-timestamps-current-$INSTANCE_ID$drySuffix"
TARGET[$__timestampAfterFile]="-timestamps-after-$INSTANCE_ID$drySuffix"
TARGET[$__timestampAfterFileNoSuffix]="-timestamps-after-$INSTANCE_ID"
TARGET[$__conflictListFile]="conflicts-$INSTANCE_ID$drySuffix"
TARGET[$__updateTriggerFile]="$TARGET_SYNC_DIR$OSYNC_DIR/.osync-update.push"
}
function Usage {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
if [ "$IS_STABLE" != "yes" ]; then
echo -e "\e[93mThis is an unstable dev build. Please use with caution.\e[0m"
fi
echo "$PROGRAM $PROGRAM_VERSION $PROGRAM_BUILD"
echo "$AUTHOR"
echo "$CONTACT"
echo ""
echo "You must use $PROGRAM with a full blown configuration file."
echo "Usage: $0 /path/to/config/file [OPTIONS]"
echo ""
echo "[OPTIONS]"
echo "--no-prefix Will suppress time / date suffix from output"
echo "--silent Will run osync without any output to stdout, used for cron jobs"
echo "--errors-only Output only errors (can be combined with silent or verbose)"
echo "--verbose Increases output"
echo "--on-changes Will launch a sync task after a short wait period if there is some file activity on initiator replica. You should try daemon mode instead"
echo ""
exit 128
}
function OnChangesHelper {
__CheckArguments 0 $# "$@" #__WITH_PARANOIA_DEBUG
local cmd
local retval
if [ "$LOCAL_OS" == "MacOSX" ]; then
if ! type fswatch > /dev/null 2>&1 ; then
Logger "No fswatch command found. Cannot monitor changes." "CRITICAL"
exit 1
fi
else
if ! type inotifywait > /dev/null 2>&1 ; then
Logger "No inotifywait command found. Cannot monitor changes." "CRITICAL"
exit 1
fi
fi
if [ ! -d "$TARGET_SYNC_DIR" ]; then
Logger "Target directory [$TARGET_SYNC_DIR] does not exist. Cannot monitor." "CRITICAL"
exit 1
fi
Logger "#### Running $PROGRAM in file monitor mode." "NOTICE"
while true; do
if [ "$LOCAL_OS" == "MacOSX" ]; then
fswatch $RSYNC_PATTERNS $RSYNC_PARTIAL_EXCLUDE --exclude "$OSYNC_DIR" -1 "$TARGET_SYNC_DIR" > /dev/null &
# Mac fswatch doesn't have a timeout switch; replace wait $! with WaitForTaskCompletion without warning nor spinner, with increased SLEEP_TIME to avoid cpu hogging. This simulates wait $! with a timeout
WaitForTaskCompletion $! 0 $MAX_WAIT 1 0 true false true
else
inotifywait $RSYNC_PATTERNS $RSYNC_PARTIAL_EXCLUDE --exclude "$OSYNC_DIR" -qq -r -e create -e modify -e delete -e move -e attrib --timeout "$MAX_WAIT" "$TARGET_SYNC_DIR" &
wait $!
fi
retval=$?
if [ $retval -eq 0 ]; then
Logger "#### Changes detected, waiting $MIN_WAIT seconds before triggering update on initiator." "NOTICE"
sleep $MIN_WAIT
# inotifywait --timeout result is 2, WaitForTaskCompletion HardTimeout is 1
elif [ "$LOCAL_OS" == "MacOSX" ]; then
Logger "#### Changes or error detected, waiting $MIN_WAIT seconds before triggering update on initiator." "NOTICE"
elif [ $retval -eq 2 ]; then
Logger "#### $MAX_WAIT timeout reached, running sync." "NOTICE"
elif [ $retval -eq 1 ]; then
Logger "#### inotify error detected, waiting $MIN_WAIT seconds before triggering update on initiator." "ERROR" $retval
sleep $MIN_WAIT
fi
TriggerInitiatorUpdate
done
}
#### SCRIPT ENTRY POINT
DESTINATION_MAILS=""
ERROR_ALERT=false
WARN_ALERT=false
if [ $# -eq 0 ]
then
Usage
fi
first=1
for i in "$@"; do
case $i in
--silent)
_LOGGER_SILENT=true
;;
--verbose)
_LOGGER_VERBOSE=true
;;
--help|-h|--version|-v)
Usage
;;
--errors-only)
_LOGGER_ERR_ONLY=true
;;
--no-prefix)
_LOGGER_PREFIX=""
;;
*)
if [ $first == "0" ]; then
Logger "Unknown option '$i'" "CRITICAL"
Usage
fi
;;
esac
first=0
done
# Remove leading space if there is one
opts="${opts# *}"
ConfigFile="${1}"
LoadConfigFile "$ConfigFile"
if [ "$LOGFILE" == "" ]; then
if [ -w /var/log ]; then
LOG_FILE="/var/log/$PROGRAM.$INSTANCE_ID.log"
elif ([ "$HOME" != "" ] && [ -w "$HOME" ]); then
LOG_FILE="$HOME/$PROGRAM.$INSTANCE_ID.log"
else
LOG_FILE="./$PROGRAM.$INSTANCE_ID.log"
fi
else
LOG_FILE="$LOGFILE"
fi
if [ ! -w "$(dirname "$LOG_FILE")" ]; then
echo "Cannot write to log directory [$(dirname "$LOG_FILE")]."
else
Logger "Script begin, logging to [$LOG_FILE]." "DEBUG"
fi
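The log-file selection above is a fallback chain: prefer /var/log when writable, then a writable $HOME, then the current directory. A sketch of that chain under the same assumptions (hypothetical helper name, not osync's API):

```shell
#!/usr/bin/env bash
# Sketch of the log-location fallback above: return the first writable
# location, mirroring the /var/log -> $HOME -> ./ order of the script.
pick_log_file() {
	local program="$1" instance="$2"
	if [ -w /var/log ]; then
		echo "/var/log/$program.$instance.log"
	elif [ "$HOME" != "" ] && [ -w "$HOME" ]; then
		echo "$HOME/$program.$instance.log"
	else
		echo "./$program.$instance.log"
	fi
}

pick_log_file osync myinstance   # e.g. /var/log/osync.myinstance.log
```

Whatever branch is taken, the file name itself stays `$PROGRAM.$INSTANCE_ID.log`, so only the directory varies with permissions.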
if [ "$IS_STABLE" != "yes" ]; then
Logger "This is an unstable dev build [$PROGRAM_BUILD]. Please use with caution." "WARN"
fi
GetLocalOS
InitLocalOSDependingSettings
PreInit
Init
CheckEnvironment
PostInit
CheckCurrentConfig
CheckCurrentConfigAll
DATE=$(date)
Logger "-------------------------------------------------------------" "NOTICE"
Logger "$DRY_WARNING$DATE - $PROGRAM $PROGRAM_VERSION script begin." "ALWAYS"
Logger "-------------------------------------------------------------" "NOTICE"
Logger "Sync task [$INSTANCE_ID] launched as $LOCAL_USER@$LOCAL_HOST (PID $SCRIPT_PID)" "NOTICE"
OnChangesHelper
dev/ofunctions.sh Executable file → Normal file
File diff suppressed because it is too large.
@@ -4,7 +4,5 @@
 #SC1091 = not following source
 #SC2086 = quoting errors (shellcheck is way too picky about quoting)
 #SC2120 = only for debug version
-#SC2034 = unused variabled (can be ignored in ofunctions.sh)
-#SC2068 = bad array usage (can be ignored in ofunctions.sh)
-shellcheck -e SC1090,SC1091,SC2086,SC2119,SC2120 $@
+shellcheck -e SC1090,SC1091,SC2086,SC2119,SC2120 $1
@@ -2,10 +2,9 @@
 ###### osync - Rsync based two way sync engine with fault tolerance
 ###### (C) 2013-2016 by Orsiris de Jong (www.netpower.fr)
-[GENERAL]
-CONFIG_FILE_REVISION=1.3.0
+###### osync v1.1x / v1.2x config file rev 2017060501
+## ---------- GENERAL OPTIONS
 ## Sync job identification
 INSTANCE_ID="local"
@@ -28,7 +27,7 @@ SSH_PASSWORD_FILE=""
 _REMOTE_TOKEN=SomeAlphaNumericToken9
 ## Create sync directories if they do not exist
-CREATE_DIRS=false
+CREATE_DIRS=no
 ## Log file location. Leaving this empty will create a logfile at /var/log/osync_version_SYNC_ID.log (or current directory if /var/log doesn't exist)
 LOGFILE=""
@@ -40,7 +39,7 @@ MINIMUM_SPACE=10240
 BANDWIDTH=0
 ## If enabled, synchronization on remote system will be processed as superuser. See documentation for /etc/sudoers file configuration.
-SUDO_EXEC=false
+SUDO_EXEC=no
 ## Paranoia option. Don't change this unless you read the documentation.
 RSYNC_EXECUTABLE=rsync
 ## Remote rsync executable path. Leave this empty in most cases
@@ -65,25 +64,23 @@ RSYNC_EXCLUDE_FROM=""
 ## List elements separator char. You may set an alternative separator char for your directories lists above.
 PATH_SEPARATOR_CHAR=";"
-[REMOTE_OPTIONS]
+## ---------- REMOTE SYNC OPTIONS
 ## ssh compression should be used unless your remote connection is good enough (LAN)
-SSH_COMPRESSION=true
+SSH_COMPRESSION=yes
 ## Ignore ssh known hosts. DANGER WILL ROBINSON DANGER ! This can lead to security issues. Only enable this if you know what you're doing.
-SSH_IGNORE_KNOWN_HOSTS=false
-SSH_CONTROLMASTER=false
+SSH_IGNORE_KNOWN_HOSTS=no
 ## Check for connectivity to remote host before launching remote sync task. Be sure the hosts responds to ping. Failing to ping will stop sync.
-REMOTE_HOST_PING=false
+REMOTE_HOST_PING=no
 ## Check for internet access by pinging one or more 3rd party hosts before remote sync task. Leave empty if you don't want this check to be be performed. Failing to ping will stop sync.
 ## If you use this function, you should set more than one 3rd party host, and be sure you can ping them.
 ## Be aware some DNS like opendns redirect false hostnames. Also, this adds an extra execution time of a bit less than a minute.
 REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com"
-[MISC_OPTIONS]
+## ---------- MISC OPTIONS
 ## Optional arguments passed to rsync executable. The following are already managed by the program and shoul never be passed here
 ## -r -l -p -t -g -o -D -E - u- i- n --executability -A -X -L -K -H -8 -zz skip-compress checksum bwlimit partial partial-dir no-whole-file whole-file backup backup-dir suffix
@@ -91,27 +88,27 @@ REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com"
 RSYNC_OPTIONAL_ARGS=""
 ## Preserve basic linux permissions
-PRESERVE_PERMISSIONS=true
-PRESERVE_OWNER=true
-PRESERVE_GROUP=true
+PRESERVE_PERMISSIONS=yes
+PRESERVE_OWNER=yes
+PRESERVE_GROUP=yes
 ## On MACOS X, does not work and will be ignored
-PRESERVE_EXECUTABILITY=true
+PRESERVE_EXECUTABILITY=yes
 ## Preserve ACLS. Make sure source and target FS can manage same ACLs or you'll get loads of errors.
-PRESERVE_ACL=false
+PRESERVE_ACL=yes
 ## Preserve Xattr. Make sure source and target FS can manage same Xattrs or you'll get loads of errors.
-PRESERVE_XATTR=false
+PRESERVE_XATTR=yes
 ## Transforms symlinks into referent files/dirs
-COPY_SYMLINKS=false
+COPY_SYMLINKS=no
 ## Treat symlinked dirs as dirs. CAUTION: This also follows symlinks outside of the replica root.
-KEEP_DIRLINKS=false
+KEEP_DIRLINKS=no
 ## Preserve hard links. Make sure source and target FS can manage hard links or you will lose them.
-PRESERVE_HARDLINKS=false
+PRESERVE_HARDLINKS=no
 ## Do a full checksum on all files that have identical sizes, they are checksummed to see if they actually are identical. This can take a long time.
-CHECKSUM=false
+CHECKSUM=no
 ## Let RSYNC compress file transfers. Do not use this if both initator and target replicas are on local system. Also, do not use this if you already enabled SSH compression.
-RSYNC_COMPRESS=true
+RSYNC_COMPRESS=yes
 ## Maximum execution time (in seconds) for sync process. Set these values zero will disable max execution times.
 ## Soft exec time only generates a warning. Hard exec time will generate a warning and stop sync process.
@@ -128,45 +125,45 @@ MIN_WAIT=60
 ## Use 0 to wait indefinitely.
 MAX_WAIT=7200
-[BACKUP_DELETE_OPTIONS]
+## ---------- BACKUP AND DELETION OPTIONS
 ## Log a list of conflictual files
-LOG_CONFLICTS=true
+LOG_CONFLICTS=yes
 ## Send an email when conflictual files are found (implies LOG_CONFLICTS)
-ALERT_CONFLICTS=false
+ALERT_CONFLICTS=no
 ## Enabling this option will keep a backup of a file on the target replica if it gets updated from the source replica. Backups will be made to .osync_workdir/backups
-CONFLICT_BACKUP=true
+CONFLICT_BACKUP=yes
 ## Keep multiple backup versions of the same file. Warning, This can be very space consuming.
-CONFLICT_BACKUP_MULTIPLE=false
+CONFLICT_BACKUP_MULTIPLE=no
 ## Osync will clean backup files after a given number of days. Setting this to 0 will disable cleaning and keep backups forever. Warning: This can be very space consuming.
 CONFLICT_BACKUP_DAYS=30
 ## If the same file exists on both replicas, newer version will be synced. However, if both files have the same timestamp but differ, CONFILCT_PREVALANCE sets winner replica.
 CONFLICT_PREVALANCE=initiator
 ## On deletion propagation to the target replica, a backup of the deleted files can be kept. Deletions will be kept in .osync_workdir/deleted
-SOFT_DELETE=true
+SOFT_DELETE=yes
 ## Osync will clean deleted files after a given number of days. Setting this to 0 will disable cleaning and keep deleted files forever. Warning: This can be very space consuming.
 SOFT_DELETE_DAYS=30
 ## Optional deletion skip on replicas. Valid values are "initiator", "target", or "initiator,target"
 SKIP_DELETION=
-[RESUME_OPTIONS]
+## ---------- RESUME OPTIONS
 ## Try to resume an aborted sync task
-RESUME_SYNC=true
+RESUME_SYNC=yes
 ## Number maximum resume tries before initiating a fresh sync.
 RESUME_TRY=2
 ## When a pidlock exists on slave replica that does not correspond to the initiator's instance-id, force pidlock removal. Be careful with this option if you have multiple initiators.
-FORCE_STRANGER_LOCK_RESUME=false
+FORCE_STRANGER_LOCK_RESUME=no
 ## Keep partial uploads that can be resumed on next run, experimental feature
-PARTIAL=false
+PARTIAL=no
 ## Use delta copy algortithm (usefull when local paths are network drives), defaults to yes
-DELTA_COPIES=true
-[ALERT_OPTIONS]
+DELTA_COPIES=yes
+## ---------- ALERT OPTIONS
 ## List of alert mails separated by spaces
 ## Most Unix systems (including Win10 bash) have mail support out of the box
@@ -190,7 +187,7 @@ SMTP_ENCRYPTION=none
 SMTP_USER=
 SMTP_PASSWORD=
-[EXECUTION_HOOKS]
+## ---------- EXECUTION HOOKS
 ## Commands can will be run before and / or after sync process (remote execution will only happen if REMOTE_OPERATION is set).
 LOCAL_RUN_BEFORE_CMD=""
@@ -204,7 +201,7 @@ MAX_EXEC_TIME_PER_CMD_BEFORE=0
 MAX_EXEC_TIME_PER_CMD_AFTER=0
 ## Stops osync execution if one of the above commands fail
-STOP_ON_CMD_ERROR=true
+STOP_ON_CMD_ERROR=yes
 ## Run local and remote after sync commands even on failure
-RUN_AFTER_CMD_ON_ERROR=false
+RUN_AFTER_CMD_ON_ERROR=no
@@ -2,10 +2,9 @@
 ###### osync - Rsync based two way sync engine with fault tolerance
 ###### (C) 2013-2016 by Orsiris de Jong (www.netpower.fr)
-[GENERAL]
-CONFIG_FILE_REVISION=1.3.0
+###### osync v1.1x / v1.2x config file rev 2017060601
+## ---------- GENERAL OPTIONS
 ## Sync job identification
 INSTANCE_ID="remote"
@@ -16,10 +15,10 @@ INITIATOR_SYNC_DIR="${HOME}/osync-tests/initiator"
 ## Target is the system osync synchronizes to (can be the same system as the initiator in case of local sync tasks). The target directory can be a local or remote path.
 #TARGET_SYNC_DIR="${HOME}/osync-tests/target"
-TARGET_SYNC_DIR="ssh://root@localhost:44999/${HOME}/osync-tests/target"
+TARGET_SYNC_DIR="ssh://root@localhost:49999/${HOME}/osync-tests/target"
 ## If the target system is remote, you can specify a RSA key (please use full path). If not defined, the default ~/.ssh/id_rsa will be used. See documentation for further information.
-SSH_RSA_PRIVATE_KEY="${HOME}/.ssh/id_rsa_local_osync_tests"
+SSH_RSA_PRIVATE_KEY="${HOME}/.ssh/id_rsa_local"
 ## Alternatively, you may specify an SSH password file (less secure). Needs sshpass utility installed.
 SSH_PASSWORD_FILE=""
@@ -28,7 +27,7 @@ SSH_PASSWORD_FILE=""
 _REMOTE_TOKEN=SomeAlphaNumericToken9
 ## Create sync directories if they do not exist
-CREATE_DIRS=false
+CREATE_DIRS=no
 ## Log file location. Leaving this empty will create a logfile at /var/log/osync_version_SYNC_ID.log (or current directory if /var/log doesn't exist)
 LOGFILE=""
@@ -40,7 +39,7 @@ MINIMUM_SPACE=10240
 BANDWIDTH=0
 ## If enabled, synchronization on remote system will be processed as superuser. See documentation for /etc/sudoers file configuration.
-SUDO_EXEC=false
+SUDO_EXEC=no
 ## Paranoia option. Don't change this unless you read the documentation.
 RSYNC_EXECUTABLE=rsync
 ## Remote rsync executable path. Leave this empty in most cases
@@ -65,25 +64,23 @@ RSYNC_EXCLUDE_FROM=""
 ## List elements separator char. You may set an alternative separator char for your directories lists above.
 PATH_SEPARATOR_CHAR=";"
-[REMOTE_OPTIONS]
+## ---------- REMOTE SYNC OPTIONS
 ## ssh compression should be used unless your remote connection is good enough (LAN)
-SSH_COMPRESSION=true
+SSH_COMPRESSION=yes
 ## Ignore ssh known hosts. DANGER WILL ROBINSON DANGER ! This can lead to security issues. Only enable this if you know what you're doing.
-SSH_IGNORE_KNOWN_HOSTS=false
-SSH_CONTROLMASTER=false
+SSH_IGNORE_KNOWN_HOSTS=no
 ## Check for connectivity to remote host before launching remote sync task. Be sure the hosts responds to ping. Failing to ping will stop sync.
-REMOTE_HOST_PING=true
+REMOTE_HOST_PING=yes
 ## Check for internet access by pinging one or more 3rd party hosts before remote sync task. Leave empty if you don't want this check to be be performed. Failing to ping will stop sync.
 ## If you use this function, you should set more than one 3rd party host, and be sure you can ping them.
 ## Be aware some DNS like opendns redirect false hostnames. Also, this adds an extra execution time of a bit less than a minute.
 REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com"
-[MISC_OPTIONS]
+## ---------- MISC OPTIONS
 ## Optional arguments passed to rsync executable. The following are already managed by the program and shoul never be passed here
 ## -r -l -p -t -g -o -D -E - u- i- n --executability -A -X -L -K -H -8 -zz skip-compress checksum bwlimit partial partial-dir no-whole-file whole-file backup backup-dir suffix
@@ -91,27 +88,27 @@ REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com"
 RSYNC_OPTIONAL_ARGS=""
 ## Preserve basic linux permissions
-PRESERVE_PERMISSIONS=true
-PRESERVE_OWNER=true
-PRESERVE_GROUP=true
+PRESERVE_PERMISSIONS=yes
+PRESERVE_OWNER=yes
+PRESERVE_GROUP=yes
 ## On MACOS X, does not work and will be ignored
-PRESERVE_EXECUTABILITY=true
+PRESERVE_EXECUTABILITY=yes
 ## Preserve ACLS. Make sure source and target FS can manage same ACLs or you'll get loads of errors.
-PRESERVE_ACL=false
+PRESERVE_ACL=yes
 ## Preserve Xattr. Make sure source and target FS can manage same Xattrs or you'll get loads of errors.
-PRESERVE_XATTR=false
+PRESERVE_XATTR=yes
 ## Transforms symlinks into referent files/dirs
-COPY_SYMLINKS=false
+COPY_SYMLINKS=no
 ## Treat symlinked dirs as dirs. CAUTION: This also follows symlinks outside of the replica root.
-KEEP_DIRLINKS=false
+KEEP_DIRLINKS=no
 ## Preserve hard links. Make sure source and target FS can manage hard links or you will lose them.
-PRESERVE_HARDLINKS=false
+PRESERVE_HARDLINKS=no
 ## Do a full checksum on all files that have identical sizes, they are checksummed to see if they actually are identical. This can take a long time.
-CHECKSUM=false
+CHECKSUM=no
 ## Let RSYNC compress file transfers. Do not use this if both initator and target replicas are on local system. Also, do not use this if you already enabled SSH compression.
-RSYNC_COMPRESS=true
+RSYNC_COMPRESS=yes
 ## Maximum execution time (in seconds) for sync process. Set these values zero will disable max execution times.
 ## Soft exec time only generates a warning. Hard exec time will generate a warning and stop sync process.
@@ -128,45 +125,45 @@ MIN_WAIT=60
 ## Use 0 to wait indefinitely.
 MAX_WAIT=7200
-[BACKUP_DELETE_OPTIONS]
+## ---------- BACKUP AND DELETION OPTIONS
 ## Log a list of conflictual files
-LOG_CONFLICTS=true
+LOG_CONFLICTS=yes
 ## Send an email when conflictual files are found (implies LOG_CONFLICTS)
-ALERT_CONFLICTS=false
+ALERT_CONFLICTS=no
 ## Enabling this option will keep a backup of a file on the target replica if it gets updated from the source replica. Backups will be made to .osync_workdir/backups
-CONFLICT_BACKUP=true
+CONFLICT_BACKUP=yes
 ## Keep multiple backup versions of the same file. Warning, This can be very space consuming.
-CONFLICT_BACKUP_MULTIPLE=false
+CONFLICT_BACKUP_MULTIPLE=no
 ## Osync will clean backup files after a given number of days. Setting this to 0 will disable cleaning and keep backups forever. Warning: This can be very space consuming.
 CONFLICT_BACKUP_DAYS=30
 ## If the same file exists on both replicas, newer version will be synced. However, if both files have the same timestamp but differ, CONFILCT_PREVALANCE sets winner replica.
 CONFLICT_PREVALANCE=initiator
 ## On deletion propagation to the target replica, a backup of the deleted files can be kept. Deletions will be kept in .osync_workdir/deleted
-SOFT_DELETE=true
+SOFT_DELETE=yes
 ## Osync will clean deleted files after a given number of days. Setting this to 0 will disable cleaning and keep deleted files forever. Warning: This can be very space consuming.
 SOFT_DELETE_DAYS=30
 ## Optional deletion skip on replicas. Valid values are "initiator", "target", or "initiator,target"
 SKIP_DELETION=
-[RESUME_OPTIONS]
+## ---------- RESUME OPTIONS
 ## Try to resume an aborted sync task
-RESUME_SYNC=true
+RESUME_SYNC=yes
 ## Number maximum resume tries before initiating a fresh sync.
 RESUME_TRY=2
 ## When a pidlock exists on slave replica that does not correspond to the initiator's instance-id, force pidlock removal. Be careful with this option if you have multiple initiators.
-FORCE_STRANGER_LOCK_RESUME=false
+FORCE_STRANGER_LOCK_RESUME=no
 ## Keep partial uploads that can be resumed on next run, experimental feature
-PARTIAL=false
+PARTIAL=no
 ## Use delta copy algortithm (usefull when local paths are network drives), defaults to yes
-DELTA_COPIES=true
-[ALERT_OPTIONS]
+DELTA_COPIES=yes
+## ---------- ALERT OPTIONS
 ## List of alert mails separated by spaces
 ## Most Unix systems (including Win10 bash) have mail support out of the box
@@ -190,7 +187,7 @@ SMTP_ENCRYPTION=none
 SMTP_USER=
 SMTP_PASSWORD=
-[EXECUTION_HOOKS]
+## ---------- EXECUTION HOOKS
 ## Commands can will be run before and / or after sync process (remote execution will only happen if REMOTE_OPERATION is set).
 LOCAL_RUN_BEFORE_CMD=""
@@ -204,7 +201,7 @@ MAX_EXEC_TIME_PER_CMD_BEFORE=0
 MAX_EXEC_TIME_PER_CMD_AFTER=0
 ## Stops osync execution if one of the above commands fail
-STOP_ON_CMD_ERROR=true
+STOP_ON_CMD_ERROR=yes
 ## Run local and remote after sync commands even on failure
-RUN_AFTER_CMD_ON_ERROR=false
+RUN_AFTER_CMD_ON_ERROR=no
@ -1,18 +1,12 @@
#!/usr/bin/env bash #!/usr/bin/env bash
# osync test suite 2023061401
# Allows the following environment variables
# RUNNING_ON_GITHUB_ACTIONS=[true|false]
# SSH_PORT=22
# SKIP_REMOTE=[true|false]
## On Mac OSX, this needs to be run as root in order to use sudo without password ## On Mac OSX, this needs to be run as root in order to use sudo without password
## From current terminal run sudo -s in order to get a new terminal as root ## From current terminal run sudo -s in order to get a new terminal as root
## On CYGWIN / MSYS, ACL and extended attributes aren't supported ## On CYGWIN / MSYS, ACL and extended attributes aren't supported
# osync test suite 2018070206
# 4 tests: # 4 tests:
# quicklocal # quicklocal
# quickremote (with ssh_filter.sh) # quickremote (with ssh_filter.sh)
@ -22,7 +16,6 @@
# for each test # for each test
# files with spaces, subdirs # files with spaces, subdirs
# largefileset (...large ?) # largefileset (...large ?)
# quickremote test with controlmaster enabled
# exclusions # exclusions
# conflict resolution initiator with backups / multiple backups # conflict resolution initiator with backups / multiple backups
# conflict resolution target with backups / multiple backups # conflict resolution target with backups / multiple backups
@ -43,30 +36,18 @@
# setfacl needs double ':' to be compatible with both linux and BSD # setfacl needs double ':' to be compatible with both linux and BSD
# setfacl -m o::rwx file # setfacl -m o::rwx file
# On Windows 10 bash, we need to create host SSH keys first with ssh-keygen -A
# Then start ssh with service ssh start
# TODO, use copies of config file on each test function
if [ "$SKIP_REMOTE" = "" ]; then
SKIP_REMOTE=false
REMOTE_USER=root
fi
homedir=$(eval echo ~${REMOTE_USER})
# drupal servers are often unreachable for whetever reason or give 0 bytes files # drupal servers are often unreachable for whetever reason or give 0 bytes files
#LARGE_FILESET_URL="http://ftp.drupal.org/files/projects/drupal-8.2.2.tar.gz" #LARGE_FILESET_URL="http://ftp.drupal.org/files/projects/drupal-8.2.2.tar.gz"
LARGE_FILESET_URL="https://ftp.drupal.org/files/projects/drupal-11.0.10.tar.gz" LARGE_FILESET_URL="http://www.netpower.fr/sites/default/files/osync-test-files.tar.gz"
# Fakeroot for install / uninstall and test of executables
FAKEROOT="${HOME}/osync_test_install"
OSYNC_DIR="$(pwd)" OSYNC_DIR="$(pwd)"
OSYNC_DIR=${OSYNC_DIR%%/dev*} OSYNC_DIR=${OSYNC_DIR%%/dev*}
DEV_DIR="$OSYNC_DIR/dev" DEV_DIR="$OSYNC_DIR/dev"
TESTS_DIR="$DEV_DIR/tests" TESTS_DIR="$DEV_DIR/tests"
# Fakeroot for install / uninstall and test of executables
FAKEROOT="${homedir}/osync_test_install"
CONF_DIR="$TESTS_DIR/conf" CONF_DIR="$TESTS_DIR/conf"
LOCAL_CONF="local.conf" LOCAL_CONF="local.conf"
REMOTE_CONF="remote.conf" REMOTE_CONF="remote.conf"
@ -75,11 +56,11 @@ TMP_OLD_CONF="tmp.old.conf"
OSYNC_EXECUTABLE="$FAKEROOT/usr/local/bin/osync.sh" OSYNC_EXECUTABLE="$FAKEROOT/usr/local/bin/osync.sh"
OSYNC_DEV_EXECUTABLE="dev/n_osync.sh" OSYNC_DEV_EXECUTABLE="dev/n_osync.sh"
OSYNC_UPGRADE="upgrade-v1.0x-v1.3x.sh" OSYNC_UPGRADE="upgrade-v1.0x-v1.2x.sh"
TMP_FILE="$DEV_DIR/tmp" TMP_FILE="$DEV_DIR/tmp"
OSYNC_TESTS_DIR="${homedir}/osync-tests" OSYNC_TESTS_DIR="${HOME}/osync-tests"
INITIATOR_DIR="$OSYNC_TESTS_DIR/initiator" INITIATOR_DIR="$OSYNC_TESTS_DIR/initiator"
TARGET_DIR="$OSYNC_TESTS_DIR/target" TARGET_DIR="$OSYNC_TESTS_DIR/target"
OSYNC_WORKDIR=".osync_workdir" OSYNC_WORKDIR=".osync_workdir"
@@ -92,56 +73,30 @@ OSYNC_VERSION=1.x.y
 OSYNC_MIN_VERSION=x
 OSYNC_IS_STABLE=maybe
-PRIVKEY_NAME="id_rsa_local_osync_tests"
-PUBKEY_NAME="${PRIVKEY_NAME}.pub"
 function SetupSSH {
-echo "Setting up an ssh key to ${homedir}/.ssh/${PRIVKEY_NAME}"
+echo -e 'y\n'| ssh-keygen -t rsa -b 2048 -N "" -f "${HOME}/.ssh/id_rsa_local"
-echo -e 'y\n'| ssh-keygen -t rsa -b 2048 -N "" -f "${homedir}/.ssh/${PRIVKEY_NAME}"
+if ! grep "$(cat ${HOME}/.ssh/id_rsa_local.pub)" "${HOME}/.ssh/authorized_keys"; then
+echo "from=\"*\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=\"$FAKEROOT/usr/local/bin/ssh_filter.sh SomeAlphaNumericToken9\" $(cat ${HOME}/.ssh/id_rsa_local.pub)" >> "${HOME}/.ssh/authorized_keys"
-SSH_AUTH_LINE="from=\"*\",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=\"$FAKEROOT/usr/local/bin/ssh_filter.sh SomeAlphaNumericToken9\" $(cat ${homedir}/.ssh/${PUBKEY_NAME})"
-echo "ls -alh ${homedir}"
-ls -alh "${homedir}"
-echo "ls -alh ${homedir}/.ssh"
-ls -alh "${homedir}/.ssh"
-if [ -f "${homedir}/.ssh/authorized_keys" ]; then
-if ! grep "$(cat ${homedir}/.ssh/${PUBKEY_NAME})" "${homedir}/.ssh/authorized_keys"; then
-echo "Adding auth line in authorized_keys file ${homedir}/.ssh/authorized_keys"
-echo "$SSH_AUTH_LINE" >> "${homedir}/.ssh/authorized_keys"
 fi
-else
+chmod 600 "${HOME}/.ssh/authorized_keys"
-echo "Creating authorized_keys file ${homedir}/.ssh/authorized_keys"
-echo "$SSH_AUTH_LINE" >> "${homedir}/.ssh/authorized_keys"
-fi
-chmod 600 "${homedir}/.ssh/authorized_keys"
 # Add localhost to known hosts so self connect works
 if [ -z "$(ssh-keygen -F localhost)" ]; then
-ssh-keyscan -H localhost >> "${homedir}/.ssh/known_hosts"
+ssh-keyscan -H localhost >> "${HOME}/.ssh/known_hosts"
 fi
-# Update remote conf files with SSH port and file id location
+# Update remote conf files with SSH port
-sed -i.tmp 's#ssh://.*@localhost:[0-9]*/${HOME}/osync-tests/target#ssh://'$REMOTE_USER'@localhost:'$SSH_PORT'/'${homedir}'/osync-tests/target#' "$CONF_DIR/$REMOTE_CONF"
+sed -i.tmp 's#ssh://.*@localhost:[0-9]*/${HOME}/osync-tests/target#ssh://'$REMOTE_USER'@localhost:'$SSH_PORT'/${HOME}/osync-tests/target#' "$CONF_DIR/$REMOTE_CONF"
-sed -i.tmp2 's#SSH_RSA_PRIVATE_KEY="${HOME}/.ssh/id_rsa_local_osync_tests"#SSH_RSA_PRIVATE_KEY="'${homedir}'/.ssh/id_rsa_local_osync_tests"#' "$CONF_DIR/$REMOTE_CONF"
-echo "ls -alh ${homedir}/.ssh"
-ls -alh "${homedir}/.ssh"
-echo "cat ${homedir}/.ssh/authorized_keys"
-cat "${homedir}/.ssh/authorized_keys"
-echo "###"
-echo "END SETUP SSH"
 }
 function RemoveSSH {
-echo "Now removing SSH keys"
+local pubkey
-if [ -f "${homedir}/.ssh/id_rsa_local_osync_tests" ]; then
-echo "Restoring SSH authorized_keys file"
+if [ -f "${HOME}/.ssh/id_rsa_local" ]; then
-sed -i.bak "s|.*$(cat "${homedir}/.ssh/id_rsa_local_osync_tests.pub")||g" "${homedir}/.ssh/authorized_keys"
-rm -f "${homedir}/.ssh/{id_rsa_local_osync_tests.pub,id_rsa_local_osync_tests}"
+pubkey=$(cat "${HOME}/.ssh/id_rsa_local.pub")
+sed -i.bak "s|.*$pubkey.*||g" "${HOME}/.ssh/authorized_keys"
+rm -f "${HOME}/.ssh/{id_rsa_local.pub,id_rsa_local}"
 fi
 }
@@ -190,27 +145,23 @@ function CreateOldFile () {
 }
 function PrepareLocalDirs () {
-if [ -d "$OSYNC_TESTS_DIR" ]; then
+# Remote dirs are the same as local dirs, so no problem here
-rm -rf "$OSYNC_TESTS_DIR"
+if [ -d "$INITIATOR_DIR" ]; then
+rm -rf "$INITIATOR_DIR"
 fi
-mkdir "$OSYNC_TESTS_DIR"
+mkdir -p "$INITIATOR_DIR"
-mkdir "$INITIATOR_DIR"
-mkdir "$TARGET_DIR"
+if [ -d "$TARGET_DIR" ]; then
+rm -rf "$TARGET_DIR"
+fi
+mkdir -p "$TARGET_DIR"
 }
 function oneTimeSetUp () {
 START_TIME=$SECONDS
-#echo "Running forced merge"
+mkdir --parents "$FAKEROOT"
-#cd "${DEV_DIR}"
-#$SUDO_CMD ./merge.sh osync
-echo "Setting security for files"
-$SUDO_CMD find ${OSYNC_DIR} -exec chmod 755 {} \+
-echo "Show content of osync dir"
-ls -alh ${OSYNC_DIR}
-echo "Running install.sh from ${OSYNC_DIR}"
-$SUDO_CMD ${OSYNC_DIR}/install.sh --no-stats --prefix="${FAKEROOT}"
 source "$DEV_DIR/ofunctions.sh"
 # Fix default umask because of ACL test that expects 0022 when creating test files
@@ -221,45 +172,29 @@ function oneTimeSetUp () {
 echo "Detected OS: $LOCAL_OS"
 # Set some travis related changes
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ]; then
+if [ "$TRAVIS_RUN" == true ]; then
-echo "Running with GITHUB ACTIONS settings"
+echo "Running with travis settings"
-#REMOTE_USER="runner"
+REMOTE_USER="travis"
-REMOTE_USER="root" # WIP
+RHOST_PING="no"
-homedir=$(eval echo ~${REMOTE_USER})
-RHOST_PING=false
 SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "REMOTE_3RD_PARTY_HOSTS" ""
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "REMOTE_HOST_PING" false
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "REMOTE_HOST_PING" "no"
 SetConfFileValue "$CONF_DIR/$OLD_CONF" "REMOTE_3RD_PARTY_HOSTS" ""
-SetConfFileValue "$CONF_DIR/$OLD_CONF" "REMOTE_HOST_PING" false
+SetConfFileValue "$CONF_DIR/$OLD_CONF" "REMOTE_HOST_PING" "no"
 else
 echo "Running with local settings"
 REMOTE_USER="root"
-homedir=$(eval echo ~${REMOTE_USER})
+RHOST_PING="yes"
-RHOST_PING=true
 SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "REMOTE_3RD_PARTY_HOSTS" "\"www.kernel.org www.google.com\""
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "REMOTE_HOST_PING" true
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "REMOTE_HOST_PING" "yes"
 SetConfFileValue "$CONF_DIR/$OLD_CONF" "REMOTE_3RD_PARTY_HOSTS" "\"www.kernel.org www.google.com\""
-SetConfFileValue "$CONF_DIR/$OLD_CONF" "REMOTE_HOST_PING" true
+SetConfFileValue "$CONF_DIR/$OLD_CONF" "REMOTE_HOST_PING" "yes"
 fi
-# Fix test directories for Github actions
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" INITIATOR_SYNC_DIR "\"${homedir}/osync-tests/initiator\""
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" TARGET_SYNC_DIR "\"${homedir}/osync-tests/target\""
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" INITIATOR_SYNC_DIR "\"${homedir}/osync-tests/initiator\""
-SetConfFileValue "$CONF_DIR/$OLD_CONF" MASTER_SYNC_DIR "\"${homedir}/osync-tests/initiator\""
-SetConfFileValue "$CONF_DIR/$OLD_CONF" SLAVE_SYNC_DIR "\"${homedir}/osync-tests/target\""
 # Get default ssh port from env
 if [ "$SSH_PORT" == "" ]; then
 SSH_PORT=22
-echo "Running with SSH_PORT=${SSH_PORT}"
 fi
 # Setup modes per test
@@ -269,8 +204,8 @@ function oneTimeSetUp () {
 readonly __confRemote=3
 osyncParameters=()
-osyncParameters[$__quickLocal]="--initiator=$INITIATOR_DIR --target=$TARGET_DIR --instance-id=quicklocal --non-interactive"
+osyncParameters[$__quickLocal]="--initiator=$INITIATOR_DIR --target=$TARGET_DIR --instance-id=quicklocal"
-osyncParameters[$__confLocal]="$CONF_DIR/$LOCAL_CONF --non-interactive"
+osyncParameters[$__confLocal]="$CONF_DIR/$LOCAL_CONF"
 osyncDaemonParameters=()
@@ -280,9 +215,9 @@ function oneTimeSetUp () {
 osyncDaemonParameters[$__local]="$CONF_DIR/$LOCAL_CONF --on-changes"
 # Do not check remote config on msys or cygwin since we don't have a local SSH server
-if [ "$LOCAL_OS" != "msys" ] && [ "$LOCAL_OS" != "Cygwin" ] && [ $SKIP_REMOTE != true ]; then
+if [ "$LOCAL_OS" != "msys" ] && [ "$LOCAL_OS" != "Cygwin" ]; then
-osyncParameters[$__quickRemote]="--initiator=$INITIATOR_DIR --target=ssh://localhost:$SSH_PORT/$TARGET_DIR --rsakey=${homedir}/.ssh/id_rsa_local_osync_tests --instance-id=quickremote --remote-token=SomeAlphaNumericToken9 --non-interactive"
+osyncParameters[$__quickRemote]="--initiator=$INITIATOR_DIR --target=ssh://localhost:$SSH_PORT/$TARGET_DIR --rsakey=${HOME}/.ssh/id_rsa_local --instance-id=quickremote --remote-token=SomeAlphaNumericToken9"
-osyncParameters[$__confRemote]="$CONF_DIR/$REMOTE_CONF --non-interactive"
+osyncParameters[$__confRemote]="$CONF_DIR/$REMOTE_CONF"
 osyncDaemonParameters[$__remote]="$CONF_DIR/$REMOTE_CONF --on-changes"
@@ -315,14 +250,14 @@ function oneTimeSetUp () {
 SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "SKIP_DELETION" ""
 SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "SKIP_DELETION" ""
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "COPY_SYMLINKS" false
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "COPY_SYMLINKS" "no"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "COPY_SYMLINKS" false
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "COPY_SYMLINKS" "no"
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "CONFLICT_BACKUP_MULTIPLE" false
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "CONFLICT_BACKUP_MULTIPLE" "no"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "CONFLICT_BACKUP_MULTIPLE" false
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "CONFLICT_BACKUP_MULTIPLE" "no"
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "FORCE_STRANGER_LOCK_RESUME" false
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "FORCE_STRANGER_LOCK_RESUME" "no"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "FORCE_STRANGER_LOCK_RESUME" false
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "FORCE_STRANGER_LOCK_RESUME" "no"
 SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "SOFT_MAX_EXEC_TIME" "7200"
 SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "HARD_MAX_EXEC_TIME" "10600"
@@ -334,9 +269,7 @@ function oneTimeTearDown () {
 # Set osync version stable flag back to origin
 #SetConfFileValue "$OSYNC_DIR/osync.sh" "IS_STABLE" "$OSYNC_IS_STABLE"
-if [ "$SKIP_REMOTE" != true ]; then
 RemoveSSH
-fi
 #TODO: uncomment this when dev is done
 #rm -rf "$OSYNC_TESTS_DIR"
@@ -348,7 +281,7 @@ function oneTimeTearDown () {
 $SUDO_CMD ./install.sh --remove --no-stats --prefix="$FAKEROOT"
 assertEquals "Uninstall failed" "0" $?
-ELAPSED_TIME=$((SECONDS-START_TIME))
+ELAPSED_TIME=$(($SECONDS - $START_TIME))
 echo "It took $ELAPSED_TIME seconds to run these tests."
 }
@@ -357,52 +290,12 @@ function setUp () {
 rm -rf "$TARGET_DIR"
 }
-function test_SSH {
-# Make sure we have SSH on your test server
-# This has become kind of tricky on github actions servers
-echo "Testing SSH"
-failure=false
-# Testing as "remote user"
-echo "ls -alh ${homedir}/.ssh"
-ls -alh "${homedir}/.ssh"
-echo "Running SSH test as ${REMOTE_USER}"
-# SSH_PORT and SSH_USER are set by oneTimeSetup
-$SUDO_CMD ssh -i "${homedir}/.ssh/${PRIVKEY_NAME}" -p $SSH_PORT ${REMOTE_USER}@localhost "env _REMOTE_TOKEN=SomeAlphaNumericToken9 echo \"Remotely:\"; whoami; echo \"TEST OK\""
-if [ $? -ne 0 ]; then
-echo "SSH test failed"
-failure=true
-fi
-# Testing as current user
-#echo "ls -alh ${homedir}/.ssh"
-#ls -alh "${homedir}/.ssh"
-#echo "Running SSH test as $(whoami)"
-#$SUDO_CMD ssh -i "${homedir}/.ssh/${PRIVKEY_NAME}" -p $SSH_PORT $(whoami)@localhost "env _REMOTE_TOKEN=SomeAlphaNumericToken9 echo \"Remotely:\"; whoami; echo \"TEST OK\""
-#if [ $? -ne 0 ]; then
-#	echo "SSH test failed"
-#	failure=true
-#fi
-if [ $failure == true ]; then
-exit 1 # Try to see if we can abort all tests
-assertEquals "Test SSH failed" false $failure
-fi
-}
 # This test has to be done everytime in order for osync executable to be fresh
 function test_Merge () {
 cd "$DEV_DIR"
 ./merge.sh osync
 assertEquals "Merging code" "0" $?
-#WIP use debug code
-alias cp=cp
-cp "$DEV_DIR/debug_osync.sh" "$OSYNC_DIR/osync.sh"
 cd "$OSYNC_DIR"
 echo ""
@@ -412,12 +305,12 @@ function test_Merge () {
 # Set osync version to stable while testing to avoid warning message
 # Don't use SetConfFileValue here since for whatever reason Travis does not like creating a sed temporary file in $FAKEROOT
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ]; then
+if [ "$TRAVIS_RUN" == true ]; then
-$SUDO_CMD sed -i.tmp 's/^IS_STABLE=.*/IS_STABLE=true/' "$OSYNC_EXECUTABLE"
+$SUDO_CMD sed -i.tmp 's/^IS_STABLE=.*/IS_STABLE=yes/' "$OSYNC_EXECUTABLE"
 else
-sed -i.tmp 's/^IS_STABLE=.*/IS_STABLE=true/' "$OSYNC_EXECUTABLE"
+sed -i.tmp 's/^IS_STABLE=.*/IS_STABLE=yes/' "$OSYNC_EXECUTABLE"
 fi
-#SetConfFileValue "$OSYNC_EXECUTABLE" "IS_STABLE" true
+#SetConfFileValue "$OSYNC_EXECUTABLE" "IS_STABLE" "yes"
 assertEquals "Install failed" "0" $?
@@ -441,15 +334,6 @@ function test_LargeFileSet () {
 done
 }
-function test_controlMaster () {
-cd "$OSYNC_DIR"
-PrepareLocalDirs
-echo "Running with parameters ${osyncParameters[$__quickRemote]} --ssh-controlmaster"
-REMOTE_HOST_PING=$REMOTE_PING $OSYNC_EXECUTABLE ${osyncParameters[$__quickRemote]} --ssh-controlmaster
-assertEquals "Running quick remote test with controlmaster enabled." "0" $?
-}
 function test_Exclusions () {
 # Will sync except php files
 # RSYNC_EXCLUDE_PATTERN="*.php" is set at runtime for quicksync and in config files for other runs
@@ -480,9 +364,9 @@ function test_Exclusions () {
 function test_Deletetion () {
 local iFile1="$INITIATOR_DIR/ific"
-local iFile2="$INITIATOR_DIR/i foc (something)"
+local iFile2="$INITIATOR_DIR/ifoc"
 local tFile1="$TARGET_DIR/tfic"
-local tFile2="$TARGET_DIR/t foc [nothing]"
+local tFile2="$TARGET_DIR/tfoc"
 for i in "${osyncParameters[@]}"; do
@@ -574,7 +458,7 @@ function test_deletion_failure () {
 $SUDO_CMD $IMMUTABLE_OFF_CMD "$TARGET_DIR/$FileA"
 $SUDO_CMD $IMMUTABLE_OFF_CMD "$INITIATOR_DIR/$FileB"
-REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i --verbose
+REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i
 assertEquals "Third deletion run with parameters [$i]." "0" $?
 [ ! -f "$TARGET_DIR/$FileA" ]
@@ -598,7 +482,7 @@ function test_skip_deletion () {
 fi
 # TRAVIS SPECIFIC - time limitation
-if [ "$RUNNING_ON_GITHUB_ACTIONS" != true ]; then
+if [ "$TRAVIS_RUN" != true ]; then
 modes=('initiator' 'target' 'initiator,target')
 else
 modes=('target')
@@ -677,7 +561,7 @@ function test_handle_symlinks () {
 fi
 # Check with and without copySymlinks
-copySymlinks=false
+copySymlinks="no"
 echo "Running with COPY_SYMLINKS=$copySymlinks"
@@ -759,12 +643,12 @@ function test_handle_symlinks () {
 done
 # TRAVIS SPECIFIC - time limitation
-if [ "$RUNNING_ON_GITHUB_ACTIONS" != true ]; then
+if [ "$TRAVIS_RUN" != true ]; then
 return 0
 fi
 # Check with and without copySymlinks
-copySymlinks=true
+copySymlinks="yes"
 echo "Running with COPY_SYMLINKS=$copySymlinks"
@@ -878,13 +762,13 @@ function test_softdeletion_cleanup () {
 touch "$file.new"
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
+if [ "$TRAVIS_RUN" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
 echo "Skipping changing ctime on file because travis / bsd / macos / Win10 / msys / cygwin does not support debugfs"
 else
 CreateOldFile "$file.old"
 fi
 done
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
+if [ "$TRAVIS_RUN" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
 echo "Skipping changing ctime on dir too"
 else
 CreateOldFile "$DirA" true
@@ -899,7 +783,7 @@ function test_softdeletion_cleanup () {
 [ -f "$file.new" ]
 assertEquals "New softdeleted / backed up file [$file.new] exists." "0" $?
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
+if [ "$TRAVIS_RUN" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
 [ ! -f "$file.old" ]
 assertEquals "Old softdeleted / backed up file [$file.old] is deleted permanently." "0" $?
 else
@@ -908,7 +792,7 @@ function test_softdeletion_cleanup () {
 fi
 done
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
+if [ "$TRAVIS_RUN" == true ] || [ "$LOCAL_OS" == "BSD" ] || [ "$LOCAL_OS" == "MacOSX" ] || [ "$LOCAL_OS" == "WinNT10" ] || [ "$LOCAL_OS" == "msys" ] || [ "$LOCAL_OS" == "Cygwin" ]; then
 [ ! -d "$DirA" ]
 assertEquals "Old softdeleted / backed up directory [$dirA] is deleted permanently." "0" $?
 [ ! -d "$DirB" ]
@@ -925,7 +809,7 @@ function test_softdeletion_cleanup () {
 function test_FileAttributePropagation () {
-if [ "$RUNNING_ON_GITHUB_ACTIONS" == true ]; then
+if [ "$TRAVIS_RUN" == true ]; then
 echo "Skipping FileAttributePropagation tests as travis does not support getfacl / setfacl."
 return 0
 fi
@@ -935,11 +819,6 @@ function test_FileAttributePropagation () {
 return 0
 fi
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "PRESERVE_ACL" true
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "PRESERVE_XATTR" true
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "PRESERVE_ACL" true
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "PRESERVE_XATTR" true
 for i in "${osyncParameters[@]}"; do
 cd "$OSYNC_DIR"
 PrepareLocalDirs
@@ -1004,11 +883,6 @@ function test_FileAttributePropagation () {
 getfacl "$INITIATOR_DIR/$DirD" | grep "other::-wx" > /dev/null
 assertEquals "ACLs matched original value on initiator subdirectory." "0" $?
 done
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "PRESERVE_ACL" false
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "PRESERVE_XATTR" false
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "PRESERVE_ACL" false
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "PRESERVE_XATTR" false
 }
 function test_ConflictBackups () {
@@ -1052,8 +926,8 @@ function test_MultipleConflictBackups () {
 local additionalParameters
 # modify config files
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "CONFLICT_BACKUP_MULTIPLE" true
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "CONFLICT_BACKUP_MULTIPLE" "yes"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "CONFLICT_BACKUP_MULTIPLE" true
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "CONFLICT_BACKUP_MULTIPLE" "yes"
 if [ "$OSYNC_MIN_VERSION" != "1" ]; then
 additionalParameters="--errors-only --summary --no-prefix"
@@ -1073,28 +947,28 @@ function test_MultipleConflictBackups () {
 echo "$FileB" > "$TARGET_DIR/$FileB"
 # First run
-CONFLICT_BACKUP_MULTIPLE=true REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
+CONFLICT_BACKUP_MULTIPLE=yes REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
 assertEquals "First deletion run with parameters [$i]." "0" $?
 echo "$FileA+" > "$TARGET_DIR/$FileA"
 echo "$FileB+" > "$INITIATOR_DIR/$FileB"
 # Second run
-CONFLICT_BACKUP_MULTIPLE=true REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
+CONFLICT_BACKUP_MULTIPLE=yes REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
 assertEquals "First deletion run with parameters [$i]." "0" $?
 echo "$FileA-" > "$TARGET_DIR/$FileA"
 echo "$FileB-" > "$INITIATOR_DIR/$FileB"
 # Third run
-CONFLICT_BACKUP_MULTIPLE=true REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
+CONFLICT_BACKUP_MULTIPLE=yes REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
 assertEquals "First deletion run with parameters [$i]." "0" $?
 echo "$FileA*" > "$TARGET_DIR/$FileA"
 echo "$FileB*" > "$INITIATOR_DIR/$FileB"
 # Fouth run
-CONFLICT_BACKUP_MULTIPLE=true REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
+CONFLICT_BACKUP_MULTIPLE=yes REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i $additionalParameters
 assertEquals "First deletion run with parameters [$i]." "0" $?
 # This test may fail only on 31th December at 23:59 :)
@@ -1105,8 +979,8 @@ function test_MultipleConflictBackups () {
 assertEquals "3 Backup files are present in [$TARGET_DIR/$OSYNC_BACKUP_DIR/]." "0" $?
 done
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "CONFLICT_BACKUP_MULTIPLE" false
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "CONFLICT_BACKUP_MULTIPLE" "no"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "CONFLICT_BACKUP_MULTIPLE" false
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "CONFLICT_BACKUP_MULTIPLE" "no"
 }
 function test_Locking () {
@@ -1197,8 +1071,8 @@ function test_Locking () {
 # Target lock present should be resumed if instance ID is NOT the same as current one but FORCE_STRANGER_UNLOCK=yes
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "FORCE_STRANGER_LOCK_RESUME" true
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "FORCE_STRANGER_LOCK_RESUME" "yes"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "FORCE_STRANGER_LOCK_RESUME" true
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "FORCE_STRANGER_LOCK_RESUME" "yes"
 for i in "${osyncParameters[@]}"; do
@@ -1212,15 +1086,17 @@ function test_Locking () {
 assertEquals "Should be able to resume when target has lock with different instance id but FORCE_STRANGER_UNLOCK=yes." "0" $?
 done
-SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "FORCE_STRANGER_LOCK_RESUME" false
+SetConfFileValue "$CONF_DIR/$LOCAL_CONF" "FORCE_STRANGER_LOCK_RESUME" "no"
-SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "FORCE_STRANGER_LOCK_RESUME" false
+SetConfFileValue "$CONF_DIR/$REMOTE_CONF" "FORCE_STRANGER_LOCK_RESUME" "no"
 }
 function test_ConflictDetetion () {
-# Tests compatible with v1.4+
+local result
-if [ $OSYNC_MIN_VERSION -lt 4 ]; then
+# Tests compatible with v1.3+
-echo "Skipping conflict detection test because osync min version is $OSYNC_MIN_VERSION (must be 4 at least)."
+if [ $OSYNC_MIN_VERSION -lt 3 ]; then
+echo "Skipping conflict detection test because osync min version is $OSYNC_MIN_VERSION (must be 3 at least)."
 return 0
 fi
@@ -1238,7 +1114,7 @@ function test_ConflictDetetion () {
 touch "$TARGET_DIR/$FileA"
 # Initializing treeList
-REMOTE_HOST_PING=$RHOST_PING _PARANOIA_DEBUG=no $OSYNC_EXECUTABLE $i --initialize
+REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i --initialize
 assertEquals "Initialization run with parameters [$i]." "0" $?
 # Now modifying files on both sides
@@ -1250,21 +1126,19 @@ function test_ConflictDetetion () {
 # Now run should return conflicts
-REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i --log-conflicts > "$FAKEROOT/output2.log" 2>&1
+REMOTE_HOST_PING=$RHOST_PING $OSYNC_EXECUTABLE $i --log-conflicts > "$FAKEROOT/output.log" 2>&1
-assertEquals "Second run that should detect conflicts with parameters [$i]." "0" $?
+result=$?
+cat "$FAKEROOT/output.log"
+assertEquals "Second run that should detect conflicts with parameters [$i]." "0" $result
-cat "$FAKEROOT/output2.log"
+grep "$INITIATOR_DIR/$FileA << >> $TARGET_DIR/$FileA" "$FAKEROOT/output.log"
-#WIP TODO change output.log from output2.log for debug reasons
-grep "$INITIATOR_DIR/$FileA << >> $TARGET_DIR/$FileA" "$FAKEROOT/output2.log"
 assertEquals "FileA conflict detect with parameters [$i]." "0" $?
-grep "$INITIATOR_DIR/$FileB << >> $TARGET_DIR/$FileB" "$FAKEROOT/output2.log"
+grep "$INITIATOR_DIR/$FileB << >> $TARGET_DIR/$FileB" "$FAKEROOT/output.log"
 assertEquals "FileB conflict detect with parameters [$i]." "0" $?
 # TODO: Missing test for conflict prevalance (once we have FORCE_CONFLICT_PREVALANCE
 done
-return 0
 }
 function test_WaitForTaskCompletion () {
@@ -1424,6 +1298,7 @@ function test_ParallelExec () {
 function test_timedExecution () {
 local arguments
+local warnExitCode
 # Clever usage of indexes and exit codes
 # osync exits with 0 when no problem detected
@@ -1485,7 +1360,7 @@ function test_UpgradeConfRun () {
 assertEquals "Conf file upgrade" "0" $?
 # Update remote conf files with SSH port
-sed -i.tmp 's#ssh://.*@localhost:[0-9]*/${homedir}/osync-tests/target#ssh://'$REMOTE_USER'@localhost:'$SSH_PORT'/${homedir}/osync-tests/target#' "$CONF_DIR/$TMP_OLD_CONF"
+sed -i.tmp 's#ssh://.*@localhost:[0-9]*/${HOME}/osync-tests/target#ssh://'$REMOTE_USER'@localhost:'$SSH_PORT'/${HOME}/osync-tests/target#' "$CONF_DIR/$TMP_OLD_CONF"
 $OSYNC_EXECUTABLE "$CONF_DIR/$TMP_OLD_CONF"
 assertEquals "Upgraded conf file execution test" "0" $?
@@ -1515,7 +1390,6 @@ function test_DaemonMode () {
 $OSYNC_EXECUTABLE "$CONF_DIR/$LOCAL_CONF" --on-changes &
 pid=$!
-#TODO: Lower that value when dispatecher is written
 # Trivial value of 2xMIN_WAIT from config files
 echo "Sleeping for 120s"
 sleep 120
@@ -1,147 +0,0 @@
Coding Standards
================
shFlags is more than just a simple 20 line shell script. It is a pretty
significant library of shell code that at first glance is not that easy to
understand. To improve code readability and usability, some guidelines have been
set down to make the code more understandable for anyone who wants to read or
modify it.
Function declaration
--------------------
Declare functions using the following form:
```sh
doSomething() {
  echo 'done!'
}
```
One-line functions are allowed if they can fit within the 80 char line limit.
```sh
doSomething() { echo 'done!'; }
```
Function documentation
----------------------
Each function should be preceded by a header that provides the following:
1. A one-sentence summary of what the function does.
1. (optional) A longer description of what the function does, and perhaps some
special information that helps convey its usage better.
1. Args: a one-line summary of each argument of the form:
`name: type: description`
1. Output: a one-line summary of the output provided. Only output to STDOUT
must be documented, unless the output to STDERR is of significance (i.e. not
just an error message). The output should be of the form:
`type: description`
1. Returns: a one-line summary of the value returned. Returns in shell are
always integers, but if the output is a true/false for success (i.e. a
boolean), it should be noted. The output should be of the form:
`type: description`
Here is a sample header:
```
# Return valid getopt options using currently defined list of long options.
#
# This function builds a proper getopt option string for short (and long)
# options, using the current list of long options for reference.
#
# Args:
# _flags_optStr: integer: option string type (__FLAGS_OPTSTR_*)
# Output:
# string: generated option string for getopt
# Returns:
# boolean: success of operation (always returns True)
```
Variable and function names
---------------------------
All shFlags specific constants, variables, and functions will be prefixed
appropriately with 'flags'. This is to distinguish usage in the shFlags code
from users own scripts so that the shell name space remains predictable to
users. The exceptions here are the standard `assertEquals`, etc. functions.
All non built-in constants and variables will be surrounded with squiggle
brackets, e.g. `${flags_someVariable}` to improve code readability.
Due to some shells not supporting local variables in functions, care in the
naming and use of variables, both public and private, is very important.
Accidental overriding of the variables can occur easily if care is not taken as
all variables are technically global variables in some shells.
Type | Sample
---- | ------
global public constant | `FLAGS_TRUE`
global private constant | `__FLAGS_SHELL_FLAGS`
global public variable | `flags_variable`
global private variable | `__flags_variable`
global macro | `_FLAGS_SOME_MACRO_`
public function | `flags_function`
public function, local variable | `flags_variable_`
private function | `_flags_function`
private function, local variable | `_flags_variable_`
Where it makes sense to improve readability, variables can have the first
letter of the second and later words capitalized. For example, the local
variable name for the help string length is `flags_helpStrLen_`.
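As a quick illustration, the conventions above look like this in practice (the names here are made up for the example, not taken from the shFlags source):

```sh
#! /bin/sh
# Illustrative names only -- these are not actual shFlags definitions.

FLAGS_TRUE=0                 # global public constant
__FLAGS_SHELL_FLAGS='sh'     # global private constant
flags_variable='public'      # global public variable
__flags_variable='private'   # global private variable

# Public function; its "local" variables carry a trailing underscore and are
# cleaned up before returning.
flags_function() {
  flags_strLen_=${#1}
  echo "${flags_strLen_}"
  unset flags_strLen_
}

flags_function 'help me'   # prints 7
```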
There are three special-case global public variables used. They are used to
overcome the limitations of shell scoping or to prevent forking. The three
variables are:
- `flags_error`
- `flags_output`
- `flags_return`
Local variable cleanup
----------------------
As many shells do not support local variables, no support for cleanup of
variables is present either. As such, all variables local to a function must be
cleaned up with the `unset` built-in command at the end of each function.
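A minimal sketch of the pattern (`_flags_sum` is a hypothetical helper written for this example, not a real shFlags function):

```sh
#! /bin/sh
# Private function that computes a value into flags_return, then unsets its
# "local" variables before returning.
_flags_sum() {
  _flags_a_=$1
  _flags_b_=$2
  flags_return=`expr "${_flags_a_}" + "${_flags_b_}"`
  # Without this, the "local" variables leak into the global namespace in
  # shells that lack local variable support.
  unset _flags_a_ _flags_b_
  return 0
}

_flags_sum 2 3
echo "${flags_return}"   # prints 5
```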
Indentation
-----------
Code block indentation is two (2) spaces, and tabs may not be used.
```sh
if [ -z 'some string' ]; then
  someFunction
fi
```
Lines of code should be no longer than 80 characters unless absolutely
necessary. When lines are wrapped using the backslash character '\', subsequent
lines should be indented with four (4) spaces so as to differentiate from the
standard spacing of two characters, and tabs may not be used.
```sh
for x in some set of very long set of arguments that make for a very long \
    line that extends much too long for one line
do
  echo ${x}
done
```
When a conditional expression is written using the built-in [ command, and that
line must be wrapped, place the control || or && operators on the same line as
the expression where possible, with the list to be executed on its own line.
```sh
[ -n 'some really long expression' -a -n 'some other long expr' ] && \
    echo 'that was actually true!'
```
# shUnit2

shUnit2 is a [xUnit](http://en.wikipedia.org/wiki/XUnit) unit test framework for
Bourne based shell scripts, and it is designed to work in a similar manner to
[JUnit](http://www.junit.org), [PyUnit](http://pyunit.sourceforge.net), etc.. If
you have ever had the desire to write a unit test for a shell script, shUnit2
can do the job.

[![Travis CI](https://api.travis-ci.com/kward/shunit2.svg)](https://app.travis-ci.com/github/kward/shunit2)
## Table of Contents

* [Introduction](#introduction)
* [Credits / Contributors](#credits-contributors)
* [Feedback](#feedback)
* [Error Handling](#error-handling)
* [Including Line Numbers in Asserts (Macros)](#including-line-numbers-in-asserts-macros)
* [Test Skipping](#test-skipping)
* [Running specific tests from the command line](#cmd-line-args)
* [Appendix](#appendix)
  * [Getting help](#getting-help)
  * [Zsh](#zsh)
---

## <a name="introduction"></a> Introduction

shUnit2 was originally developed to provide a consistent testing solution for
[log4sh][log4sh], a shell based logging framework similar to
[log4j](http://logging.apache.org). During the development of that product, a
repeated problem of having things work just fine under one shell (`/bin/bash` on
Linux to be specific), and then not working under another shell (`/bin/sh` on
Solaris) kept coming up. Although several simple tests were run, they were not
adequate and did not catch some corner cases. The decision was finally made to
write a proper unit test framework after multiple brown-bag releases were made.
_Research was done to look for an existing product that met the testing
requirements, but no adequate product was found._

### Tested software

**Tested Operating Systems** (varies over time)

OS                                  | Support   | Verified
----------------------------------- | --------- | --------
Ubuntu Linux (14.04.05 LTS)         | Travis CI | continuous
macOS High Sierra (10.13.3)         | Travis CI | continuous
FreeBSD                             | user      | unknown
Solaris 8, 9, 10 (inc. OpenSolaris) | user      | unknown
Cygwin                              | user      | unknown

**Tested Shells**

* Bourne Shell (__sh__)
* BASH - GNU Bourne Again SHell (__bash__)
* DASH - Debian Almquist Shell (__dash__)
* Korn Shell - AT&T version of the Korn shell (__ksh__)
* mksh - MirBSD Korn Shell (__mksh__)
* zsh - Zsh (__zsh__) (since 2.1.2) _please see the Zsh shell errata for more
  information_

See the appropriate Release Notes for this release
(`doc/RELEASE_NOTES-X.X.X.txt`) for the list of actual versions tested.
### <a name="credits-contributors"></a> Credits / Contributors

A list of contributors to shUnit2 can be found in `doc/contributors.md`. Many
thanks go out to all those who have contributed to make this a better tool.

shUnit2 is the original product of many hours of work by Kate Ward, the primary
author of the code. For related software, check out https://github.com/kward.
### <a name="feedback"></a> Feedback

Feedback is most certainly welcome for this document. Send your questions,
comments, and criticisms via the
[shunit2-users](https://groups.google.com/a/forestent.com/forum/#!forum/shunit2-users/new)
forum (created 2018-12-09), or file an issue via
https://github.com/kward/shunit2/issues.
---

## <a name="quickstart"></a> Quickstart

This section will give a very quick start to running unit tests with shUnit2.
More information is located in later sections.

Here is a quick sample script to show how easy it is to write a unit test in
shell. _Note: the script as it stands expects that you are running it from the
"examples" directory._
```sh
#! /bin/sh
# file: examples/equality_test.sh

testEquality() {
  assertEquals 1 1
}

# Load shUnit2.
. ../shunit2
```

Running the unit test should give results similar to the following.

```
testEquality

Ran 1 test.

OK
```
W00t! You've just run your first successful unit test. So, what just happened?
Quite a bit really, and it all happened simply by sourcing the `shunit2`
library. The basic functionality for the script above goes like this:

* When shUnit2 is sourced, it will walk through any functions defined whose name
  starts with the string `test`, and add those to an internal list of tests to
  execute. Once a list of test functions to be run has been determined, shunit2
  will go to work.
* Before any tests are executed, shUnit2 again looks for a function, this time
  one named `oneTimeSetUp()`. If it exists, it will be run. This function is
  normally used to setup the environment for all tests to be run. Things like
  creating directories for output or setting environment variables are good to
  place here. Just so you know, you can also declare a corresponding function
  named `oneTimeTearDown()` that does the same thing, but once all the tests
  have been completed. It is good for removing temporary directories, etc.
* shUnit2 is now ready to run tests. Before doing so though, it again looks for
  another function that might be declared, one named `setUp()`. If the function
  exists, it will be run before each test. It is good for resetting the
  environment so that each test starts with a clean slate. **At this stage, the
  first test is finally run.** The success of the test is recorded for a report
  that will be generated later. After the test is run, shUnit2 looks for a final
  function that might be declared, one named `tearDown()`. If it exists, it will
  be run after each test. It is a good place for cleaning up after each test,
  maybe doing things like removing files that were created, or removing
  directories. This set of steps, `setUp() > test() > tearDown()`, is repeated
  for all of the available tests.
* Once all the work is done, shUnit2 will generate the nice report you saw
  above. A summary of all the successes and failures will be given so that you
  know how well your code is doing.
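The cycle described above can be mimicked with a tiny plain-shell driver. This is a simplified illustration of the `setUp() > test() > tearDown()` flow only, not shUnit2's actual implementation:

```sh
#! /bin/sh
# Simplified stand-ins for a suite's hook and test functions.
setUp()    { echo 'setUp'; }
tearDown() { echo 'tearDown'; }
testOne()  { echo 'testOne'; }
testTwo()  { echo 'testTwo'; }

# Wrap every test in setUp/tearDown, the way shUnit2 schedules them.
runTests() {
  for t_ in testOne testTwo; do
    setUp
    ${t_}
    tearDown
  done
}

runTests
```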
We should now try adding a test that fails. Change your unit test to look like
this.
```sh
#! /bin/sh
# file: examples/party_test.sh

testEquality() {
  assertEquals 1 1
}

testPartyLikeItIs1999() {
  year=`date '+%Y'`
  assertEquals "It's not 1999 :-(" '1999' "${year}"
}

# Load shUnit2.
. ../shunit2
```
So, what did you get? I guess it told you that this isn't 1999. Bummer, eh?
Hopefully, you noticed a couple of things that were different about the second
test. First, we added an optional message that the user will see if the assert
fails. Second, we did comparisons of strings instead of integers as in the first
test. It doesn't matter whether you are testing for equality of strings or
integers. Both work equally well with shUnit2.
Hopefully, this is enough to get you started with unit testing. If you want a
ton more examples, take a look at the tests provided with [log4sh][log4sh] or
[shFlags][shflags]. Both provide excellent examples of more advanced usage.
shUnit2 was after all written to meet the unit testing need that
[log4sh][log4sh] had.
If you are using a distribution-packaged shUnit2 that is accessible from
`/usr/bin/shunit2`, such as on Debian, you can load shUnit2 without specifying
its path. So the last 2 lines in the above can be replaced by:
```sh
# Load shUnit2.
. shunit2
```
---

## <a name="function-reference"></a> Function Reference

### <a name="general-info"></a> General Info
Any string values passed should be properly quoted -- they should be
surrounded by single-quote (`'`) or double-quote (`"`) characters -- so that the
shell will properly parse them.
### <a name="asserts"></a> Asserts

    assertEquals [message] expected actual

Asserts that _expected_ and _actual_ are equal to one another. The _expected_
and _actual_ values can be either strings or integer values as both will be
treated as strings. The _message_ is optional, and must be quoted.

    assertNotEquals [message] unexpected actual

Asserts that _unexpected_ and _actual_ are not equal to one another. The
_unexpected_ and _actual_ values can be either strings or integer values as both
will be treated as strings. The _message_ is optional, and must be quoted.

    assertSame [message] expected actual

This function is functionally equivalent to `assertEquals`.

    assertNotSame [message] unexpected actual

This function is functionally equivalent to `assertNotEquals`.

    assertContains [message] container content

Asserts that _container_ contains _content_. The _container_ and _content_
values can be either strings or integer values as both will be treated as
strings. The _message_ is optional, and must be quoted.

    assertNotContains [message] container content

Asserts that _container_ does not contain _content_. The _container_ and
_content_ values can be either strings or integer values as both will be treated
as strings. The _message_ is optional, and must be quoted.

    assertNull [message] value

Asserts that _value_ is _null_, or in shell terms, a zero-length string. The
_value_ must be a string as an integer value does not translate into a
zero-length string. The _message_ is optional, and must be quoted.

    assertNotNull [message] value

Asserts that _value_ is _not null_, or in shell terms, a non-empty string. The
_value_ may be a string or an integer as the latter will be parsed as a
non-empty string value. The _message_ is optional, and must be quoted.

    assertTrue [message] condition

Asserts that a given shell test _condition_ is _true_. The condition can be as
simple as a shell _true_ value (the value `0` -- equivalent to
`${SHUNIT_TRUE}`), or a more sophisticated shell conditional expression. The
_message_ is optional, and must be quoted.

A sophisticated shell conditional expression is equivalent to what the __if__ or
__while__ shell built-ins would use (more specifically, what the __test__
command would use). Testing for example whether some value is greater than
another value can be done this way.

    assertTrue "[ 34 -gt 23 ]"

Testing for the ability to read a file can also be done. This particular test
will fail.

    assertTrue 'test failed' "[ -r /some/non-existant/file ]"

As the expressions are standard shell __test__ expressions, it is possible to
string multiple expressions together with `-a` and `-o` in the standard fashion.
This test will succeed as the entire expression evaluates to _true_.

    assertTrue 'test failed' '[ 1 -eq 1 -a 2 -eq 2 ]'

<i>One word of warning: be very careful with your quoting as shell is not the
most forgiving of bad quoting, and things will fail in strange ways.</i>

    assertFalse [message] condition

Asserts that a given shell test _condition_ is _false_. The condition can be as
simple as a shell _false_ value (the value `1` -- equivalent to
`${SHUNIT_FALSE}`), or a more sophisticated shell conditional expression. The
_message_ is optional, and must be quoted.

_For examples of more sophisticated expressions, see `assertTrue`._
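Because `assertTrue` and `assertFalse` evaluate standard __test__ expressions, you can sanity-check the expressions themselves in plain shell first; an exit status of `0` is what `assertTrue` treats as _true_. A quick illustration (no shUnit2 required):

```sh
#! /bin/sh
# The bracketed expressions from the asserts above, evaluated directly; an
# exit status of 0 is shell "true", 1 is "false".
if [ 34 -gt 23 ]; then echo 'greater-than: true'; fi
if [ -r /some/non-existant/file ]; then echo 'readable: true'; else echo 'readable: false'; fi
if [ 1 -eq 1 -a 2 -eq 2 ]; then echo 'compound: true'; fi
```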
### <a name="failures"></a> Failures

Just to clarify, failures __do not__ test the various arguments against one
another. Failures simply fail, optionally with a message, and that is all they
do. If you need to test arguments against one another, use asserts.

If all failures do is fail, why might one use them? There are times when you may
have some very complicated logic that you need to test, and the simple asserts
provided are simply not adequate. You can do your own validation of the code,
use an `assertTrue ${SHUNIT_TRUE}` if your own tests succeeded, and use a
failure to record a failure.

    fail [message]

Fails the test immediately. The _message_ is optional, and must be quoted.

    failNotEquals [message] unexpected actual

Fails the test immediately, reporting that the _unexpected_ and _actual_ values
are not equal to one another. The _message_ is optional, and must be quoted.

_Note: no actual comparison of unexpected and actual is done._

    failSame [message] expected actual

Fails the test immediately, reporting that the _expected_ and _actual_ values
are the same. The _message_ is optional, and must be quoted.

_Note: no actual comparison of expected and actual is done._

    failNotSame [message] expected actual

Fails the test immediately, reporting that the _expected_ and _actual_ values
are not the same. The _message_ is optional, and must be quoted.

_Note: no actual comparison of expected and actual is done._

    failFound [message] content

Fails the test immediately, reporting that the _content_ was found. The
_message_ is optional, and must be quoted.

_Note: no actual search of content is done._

    failNotFound [message] content

Fails the test immediately, reporting that the _content_ was not found. The
_message_ is optional, and must be quoted.

_Note: no actual search of content is done._
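For instance, the "validate then fail" pattern might look like the sketch below. The `validateConfig` helper is hypothetical, standing in for the complicated logic; the commented-out test function shows where `assertTrue`/`fail` would go inside a real shUnit2 suite:

```sh
#! /bin/sh
# Hypothetical helper standing in for "very complicated logic".
validateConfig() {
  # Succeed (return 0) only for a non-empty, readable file path.
  [ -n "${1:-}" ] && [ -r "${1:-}" ]
}

# Inside a shUnit2 test function the pattern might look like:
#   testConfig() {
#     if validateConfig 'myapp.conf'; then
#       assertTrue ${SHUNIT_TRUE}
#     else
#       fail 'config validation failed'
#     fi
#   }
if validateConfig "$0"; then echo 'config ok'; fi
```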
### <a name="setup-teardown"></a> Setup/Teardown

    oneTimeSetUp

This function can be optionally overridden by the user in their test suite.

If this function exists, it will be called once before any tests are run. It is
useful to prepare a common environment for all tests.

    oneTimeTearDown

This function can be optionally overridden by the user in their test suite.

If this function exists, it will be called once after all tests are completed.
It is useful to clean up the environment after all tests.

    setUp

This function can be optionally overridden by the user in their test suite.

If this function exists, it will be called before each test is run. It is useful
to reset the environment before each test.

    tearDown

This function can be optionally overridden by the user in their test suite.

If this function exists, it will be called after each test completes. It is
useful to clean up the environment after each test.
### <a name="skipping"></a> Skipping

    startSkipping

This function forces the remaining _assert_ and _fail_ functions to be
"skipped", i.e. they will have no effect. Each function skipped will be recorded
so that the total of asserts and fails will not be altered.

    endSkipping

This function returns calls to the _assert_ and _fail_ functions to their
default behavior, i.e. they will be called.

    isSkipping

This function returns the current state of skipping. It can be compared against
`${SHUNIT_TRUE}` or `${SHUNIT_FALSE}` if desired.
### <a name="suites"></a> Suites

The default behavior of shUnit2 is that all tests will be found dynamically. If
you have a specific set of tests you want to run, or you don't want to use the
standard naming scheme of prefixing your tests with `test`, these functions are
for you. Most users will never use them though.

    suite

This function can be optionally overridden by the user in their test suite.

If this function exists, it will be called when `shunit2` is sourced. If it does
not exist, shUnit2 will search the parent script for all functions beginning
with the word `test`, and they will be added dynamically to the test suite.

    suite_addTest name

This function adds a function named _name_ to the list of tests scheduled for
execution as part of this test suite. This function should only be called from
within the `suite()` function.
---

## <a name="advanced-usage"></a> Advanced Usage

### <a name="some-constants-you-can-use"></a> Some constants you can use

There are several constants provided by shUnit2 as variables that might be of
use to you.
*Predefined*
| Constant         | Value |
| ---------------- | ----- |
| SHUNIT\_TRUE     | Standard shell `true` value (the integer value 0). |
| SHUNIT\_FALSE    | Standard shell `false` value (the integer value 1). |
| SHUNIT\_ERROR    | The integer value 2. |
| SHUNIT\_VERSION  | The version number of shUnit2 in use. |

*User defined*
| Constant | Value |
| ----------------- | ----- |
| SHUNIT\_CMD\_EXPR | Override which `expr` command is used. By default `expr` is used, except on BSD systems where `gexpr` is used. |
| SHUNIT\_COLOR | Enable colorized output. Options are 'auto', 'always', or 'none', with 'auto' being the default. |
| SHUNIT\_PARENT | The filename of the shell script containing the tests. This is needed specifically for Zsh support. |
| SHUNIT\_TEST\_PREFIX | Define this variable to add a prefix in front of each test name that is output in the test report. |
### <a name="error-handling"></a> Error handling

The constants values `SHUNIT_TRUE`, `SHUNIT_FALSE`, and `SHUNIT_ERROR` are
returned from nearly every function to indicate the success or failure of the
function. Additionally the variable `flags_error` is filled with a detailed
error message if any function returns with a `SHUNIT_ERROR` value.
### <a name="including-line-numbers-in-asserts-macros"></a> Including Line Numbers in Asserts (Macros) ### <a name="including-line-numbers-in-asserts-macros"></a> Including Line Numbers in Asserts (Macros)
If you include lots of assert statements in an individual test function, it can
become difficult to determine exactly which assert was thrown unless your
messages are unique. To help somewhat, line numbers can be included in the
assert messages. To enable this, a special shell "macro" must be used rather
than the standard assert calls. _Shell doesn't actually have macros; the name is
used here as the operation is similar to a standard macro._
For example, to include line numbers for an `assertEquals()` function call,
replace the `assertEquals()` with `${_ASSERT_EQUALS_}`.
_**Example** -- Asserts with and without line numbers_

```sh
#! /bin/sh
# file: examples/lineno_test.sh

@@ -469,36 +309,20 @@ testLineNo() {
}

# Load shUnit2.
. ../shunit2
```
Notes:
1. Due to how shell parses command-line arguments, _**all strings used with
   macros should be quoted twice**_. Namely, single-quotes must be converted to
   single-double-quotes, and vice-versa.<br/>
   <br/>
   Normal `assertEquals` call.<br/>
   `assertEquals 'some message' 'x' ''`<br/>
   <br/>
   Macro `_ASSERT_EQUALS_` call. Note the extra quoting around the _message_ and
   the _null_ value.<br/>
   `_ASSERT_EQUALS_ '"some message"' 'x' '""'`
1. Line numbers are not supported in all shells. If a shell does not support
   them, no errors will be thrown. Supported shells include: __bash__ (>=3.0),
   __ksh__, __mksh__, and __zsh__.
### <a name="test-skipping"></a> Test Skipping
There are times where the test code you have written is just not applicable to
the system you are running on. This section describes how to skip these tests
but maintain the total test count.
Probably the easiest example would be shell code that is meant to run under the
__bash__ shell, but the unit test is running under the Bourne shell. There are
things that just won't work. The following test code demonstrates two sample
functions, one that will be run under any shell, and another that will run only
under the __bash__ shell.
_**Example** -- math include_

```sh
@@ -547,11 +371,10 @@ oneTimeSetUp() {
}

# Load and run shUnit2.
. ../shunit2
```
Running the above test under the __bash__ shell will result in the following
output.

```console
$ /bin/bash math_test.sh
@@ -562,8 +385,7 @@ Ran 1 test.

OK
```
But, running the test under any other Unix shell will result in the following
output.

```console
$ /bin/ksh math_test.sh
@@ -574,33 +396,9 @@ Ran 1 test.

OK (skipped=1)
```
As you can see, the total number of tests has not changed, but the report
indicates that some tests were skipped.
Skipping can be controlled with the following functions: `startSkipping()`,
`endSkipping()`, and `isSkipping()`. Once skipping is enabled, it will remain
enabled until the end of the current test function call, after which skipping
is disabled.
### <a name="cmd-line-args"></a> Running specific tests from the command line.
When running a test script, you may override the default set of tests, or the suite-specified set of tests, by providing additional arguments on the command line. Each additional argument after the `--` marker is assumed to be the name of a test function to be run in the order specified. e.g.
```console
test-script.sh -- testOne testTwo otherFunction
```
or
```console
shunit2 test-script.sh testOne testTwo otherFunction
```
In either case, three functions will be run as tests, `testOne`, `testTwo`, and `otherFunction`. Note that the function `otherFunction` would not normally be run by `shunit2` as part of the implicit collection of tests, as its function name does not match the test function name pattern `test*`.
If a specified test function does not exist, `shunit2` will still attempt to run that function and thereby cause a failure which `shunit2` will catch and mark as a failed test. All other tests will run normally.
The specification of tests does not affect how `shunit2` looks for and executes the setup and tear down functions, which will still run as expected.
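The argument handling described above (everything after the `--` marker is a test name) can be sketched with plain positional-parameter juggling. This is an illustrative simplification, not shUnit2's actual option parser:

```shell
# Simulate a command line: options first, then the marker, then test names.
set -- '-v' '--' 'testOne' 'testTwo' 'otherFunction'

# Drop everything up to and including the `--` marker; whatever remains in
# "$@" is the explicit list of test functions to run, in order.
while [ $# -gt 0 ]; do
  if [ "$1" = '--' ]; then
    shift
    break
  fi
  shift
done

echo "tests to run: $*"
# prints: tests to run: testOne testTwo otherFunction
```

The runner would then invoke each remaining name as a function, which is why a nonexistent name simply produces a failed test rather than an abort.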
---

@@ -608,32 +406,25 @@ The specification of tests does not affect how `shunit2` looks for and executes
### <a name="getting-help"></a> Getting Help
For help, please send requests to either the shunit2-users@forestent.com mailing
list (archives available on the web at
https://groups.google.com/a/forestent.com/forum/#!forum/shunit2-users) or
directly to Kate Ward <kate dot ward at forestent dot com>.
### <a name="zsh"></a> Zsh
For compatibility with Zsh, there is one requirement that must be met -- the
`shwordsplit` option must be set. There are three ways to accomplish this.
1. In the unit-test script, add the following shell code snippet before sourcing
   the `shunit2` library.

```sh
setopt shwordsplit
```
2. When invoking __zsh__ from either the command-line or as a script with `#!`,
   add the `-y` parameter.

```sh
#! /bin/zsh -y
```
3. When invoking __zsh__ from the command-line, add `-o shwordsplit --` as
   parameters before the script name.

```console
$ zsh -o shwordsplit -- some_script
```
@@ -1,47 +0,0 @@
#! /bin/sh
#
# Initialize the local git hooks for this repository.
# https://git-scm.com/docs/githooks
topLevel=$(git rev-parse --show-toplevel)
if ! cd "${topLevel}"; then
echo "failed to cd into topLevel directory '${topLevel}'"
exit 1
fi
hooksDir="${topLevel}/.githooks"
if ! hooksPath=$(git config core.hooksPath); then
hooksPath="${topLevel}/.git/hooks"
fi
src="${hooksDir}/generic"
echo "linking hooks..."
for hook in \
applypatch-msg \
pre-applypatch \
post-applypatch \
pre-commit \
pre-merge-commit \
prepare-commit-msg \
commit-msg \
post-commit \
pre-rebase \
post-checkout \
post-merge \
pre-push \
pre-receive \
update \
post-receive \
post-update \
push-to-checkout \
pre-auto-gc \
post-rewrite \
sendemail-validate \
fsmonitor-watchman \
p4-pre-submit \
post-index-change
do
echo " ${hook}"
dest="${hooksPath}/${hook}"
ln -sf "${src}" "${dest}"
done
@@ -3,7 +3,7 @@
#
# Versions determines the versions of all installed shells.
#
# Copyright 2008-2020 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 License.
#
# Author: kate.ward@forestent.com (Kate Ward)
@@ -18,7 +18,7 @@
ARGV0=`basename "$0"`
LSB_RELEASE='/etc/lsb-release'
VERSIONS_SHELLS='ash /bin/bash /bin/dash /bin/ksh /bin/mksh /bin/pdksh /bin/zsh /usr/xpg4/bin/sh /bin/sh /sbin/sh'

true; TRUE=$?
false; FALSE=$?
@@ -49,10 +49,6 @@ versions_osName() {
      10.11|10.11.[0-9]*) os_name_='Mac OS X El Capitan' ;;
      10.12|10.12.[0-9]*) os_name_='macOS Sierra' ;;
      10.13|10.13.[0-9]*) os_name_='macOS High Sierra' ;;
      10.14|10.14.[0-9]*) os_name_='macOS Mojave' ;;
      10.15|10.15.[0-9]*) os_name_='macOS Catalina' ;;
      11.*) os_name_='macOS Big Sur' ;;
      12.*) os_name_='macOS Monterey' ;;
      *) os_name_='macOS' ;;
    esac
    ;;
@@ -137,11 +133,10 @@ versions_shellVersion() {
  version_=''
  case ${shell_} in
    # SunOS shells.
    /sbin/sh) ;;
    /usr/xpg4/bin/sh) version_=`versions_shell_xpg4 "${shell_}"` ;;

    # Generic shell.
    */sh)
      # This could be one of any number of shells. Try until one fits.
      version_=''
@@ -152,22 +147,16 @@ versions_shellVersion() {
      [ -z "${version_}" ] && version_=`versions_shell_xpg4 "${shell_}"`
      [ -z "${version_}" ] && version_=`versions_shell_zsh "${shell_}"`
      ;;

    # Specific shells.
    ash) version_=`versions_shell_ash "${shell_}"` ;;
    # bash - Bourne Again SHell (https://www.gnu.org/software/bash/)
    */bash) version_=`versions_shell_bash "${shell_}"` ;;
    */dash) version_=`versions_shell_dash` ;;
    # ksh - KornShell (http://www.kornshell.com/)
    */ksh) version_=`versions_shell_ksh "${shell_}"` ;;
    # mksh - MirBSD Korn Shell (http://www.mirbsd.org/mksh.htm)
    */mksh) version_=`versions_shell_ksh "${shell_}"` ;;
    # pdksh - Public Domain Korn Shell (http://web.cs.mun.ca/~michael/pdksh/)
    */pdksh) version_=`versions_shell_pdksh "${shell_}"` ;;
    # zsh (https://www.zsh.org/)
    */zsh) version_=`versions_shell_zsh "${shell_}"` ;;

    # Unrecognized shell.
    *) version_='invalid'
  esac
@@ -184,8 +173,6 @@ versions_shell_bash() {
  $1 --version : 2>&1 |grep 'GNU bash' |sed 's/.*version \([^ ]*\).*/\1/'
}

# Assuming Ubuntu Linux until somebody comes up with a better test. The
# following test will return an empty string if dash is not installed.
versions_shell_dash() {
  eval dpkg >/dev/null 2>&1
  [ $? -eq 127 ] && return  # Return if dpkg not found.
@@ -206,10 +193,6 @@ versions_shell_ksh() {
  else
    versions_version_=''
  fi
  if [ -z "${versions_version_}" ]; then
    # shellcheck disable=SC2016
    versions_version_=`${versions_shell_} -c 'echo ${KSH_VERSION}'`
  fi
  if [ -z "${versions_version_}" ]; then
    _versions_have_strings
    versions_version_=`strings "${versions_shell_}" 2>&1 \
@@ -224,14 +207,6 @@ versions_shell_ksh() {
  unset versions_shell_ versions_version_
}

# mksh - MirBSD Korn Shell (http://www.mirbsd.org/mksh.htm)
# mksh is a successor to pdksh (Public Domain Korn Shell).
versions_shell_mksh() {
  versions_shell_ksh
}

# pdksh - Public Domain Korn Shell
# pdksh is an obsolete shell, which was replaced by mksh (among others).
versions_shell_pdksh() {
  _versions_have_strings
  strings "$1" 2>&1 \
@@ -1,64 +0,0 @@
#!/bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shunit2 unit test for running subset(s) of tests based upon command line args.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# https://github.com/kward/shunit2
#
# Also shows how non-default tests or an arbitrary subset of tests can be run.
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Load test helpers.
. ./shunit2_test_helpers
CUSTOM_TEST_RAN=''
# This test does not normally run because its name does not begin with "test".
# It will be run by setting the script arguments to include the name of this test.
custom_test() {
# Arbitrary assert.
assertTrue 0
# The true intent is to set this variable, which will be tested below.
CUSTOM_TEST_RAN='yup, we ran'
}
# Verify that `customTest()` ran.
testCustomTestRan() {
assertNotNull "'custom_test()' did not run" "${CUSTOM_TEST_RAN}"
}
# Fail if this test runs, which it shouldn't if arguments are set correctly.
testShouldFail() {
fail 'testShouldFail should not be run if argument parsing works'
}
oneTimeSetUp() {
th_oneTimeSetUp
}
# If zero/one argument(s) are provided, this test is being run in its
# entirety, and therefore we want to set the arguments to the script to
# (simulate and) test the processing of command-line specified tests. If we
# don't, then the "testShouldFail" test will run (by default) and the overall
# test will fail.
#
# However, if two or more arguments are provided, then assume this test script
# is being run by hand to experiment with command-line test specification, and
# then don't override the user provided arguments.
if [ "$#" -le 1 ]; then
# We set the arguments in a POSIX way, inasmuch as we can;
# helpful tip:
# https://unix.stackexchange.com/questions/258512/how-to-remove-a-positional-parameter-from
set -- '--' 'custom_test' 'testCustomTestRan'
fi
# Load and run shunit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT=$0
. "${TH_SHUNIT}"
@@ -3,16 +3,12 @@
#
# shunit2 unit test for assert functions.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
# In this file, all assert calls under test must be wrapped in () so they do not
# influence the metrics of the test itself.
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
@@ -26,376 +22,174 @@ stderrF="${TMPDIR:-/tmp}/STDERR"
commonEqualsSame() {
  fn=$1

  # These should succeed.

  desc='equal'
  if (${fn} 'x' 'x' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc='equal_with_message'
  if (${fn} 'some message' 'x' 'x' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc='equal_with_spaces'
  if (${fn} 'abc def' 'abc def' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc='equal_null_values'
  if (${fn} '' '' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  # These should fail.

  desc='not_equal'
  if (${fn} 'x' 'y' >"${stdoutF}" 2>"${stderrF}"); then
    fail "${desc}: expected a failure"
    _showTestOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi
}

commonNotEqualsSame() {
  fn=$1

  # These should succeed.

  desc='not_same'
  if (${fn} 'x' 'y' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc='not_same_with_message'
  if (${fn} 'some message' 'x' 'y' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  # These should fail.

  desc='same'
  if (${fn} 'x' 'x' >"${stdoutF}" 2>"${stderrF}"); then
    fail "${desc}: expected a failure"
    _showTestOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi

  desc='unequal_null_values'
  if (${fn} '' '' >"${stdoutF}" 2>"${stderrF}"); then
    fail "${desc}: expected a failure"
    _showTestOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi
}

testAssertEquals() { commonEqualsSame 'assertEquals'; }
testAssertNotEquals() { commonNotEqualsSame 'assertNotEquals'; }
testAssertSame() { commonEqualsSame 'assertSame'; }
testAssertNotSame() { commonNotEqualsSame 'assertNotSame'; }

testAssertContains() {
  # Content is present.
  while read -r desc container content; do
    if (assertContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
      th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    else
      fail "${desc}: unexpected failure"
      _showTestOutput
    fi
  done <<EOF
abc_at_start abcdef abc
bcd_in_middle abcdef bcd
def_at_end abcdef def
EOF

  # Content missing.
  while read -r desc container content; do
    if (assertContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: unexpected failure"
      _showTestOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    fi
  done <<EOF
xyz_not_present abcdef xyz
zab_contains_start abcdef zab
efg_contains_end abcdef efg
acf_has_parts abcdef acf
EOF

  desc="content_starts_with_dash"
  if (assertContains 'abc -Xabc def' '-Xabc' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc="contains_with_message"
  if (assertContains 'some message' 'abcdef' 'abc' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi
}

testAssertNotContains() {
  # Content not present.
  while read -r desc container content; do
    if (assertNotContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
      th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    else
      fail "${desc}: unexpected failure"
      _showTestOutput
    fi
  done <<EOF
xyz_not_present abcdef xyz
zab_contains_start abcdef zab
efg_contains_end abcdef efg
acf_has_parts abcdef acf
EOF

  # Content present.
  while read -r desc container content; do
    if (assertNotContains "${container}" "${content}" >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    fi
  done <<EOF
abc_is_present abcdef abc
EOF

  desc='not_contains_with_message'
  if (assertNotContains 'some message' 'abcdef' 'xyz' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi
}

testAssertNull() {
  while read -r desc value; do
    if (assertNull "${value}" >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: unexpected failure"
      _showTestOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    fi
  done <<'EOF'
x_alone x
x_double_quote_a x"a
x_single_quote_a x'a
x_dollar_a x$a
x_backtick_a x`a
EOF

  desc='null_without_message'
  if (assertNull '' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc='null_with_message'
  if (assertNull 'some message' '' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc='x_is_not_null'
  if (assertNull 'x' >"${stdoutF}" 2>"${stderrF}"); then
    fail "${desc}: expected a failure"
    _showTestOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi
}

testAssertNotNull() {
  while read -r desc value; do
    if (assertNotNull "${value}" >"${stdoutF}" 2>"${stderrF}"); then
      th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    else
      fail "${desc}: unexpected failure"
      _showTestOutput
    fi
  done <<'EOF'
x_alone x
x_double_quote_b x"b
x_single_quote_b x'b
x_dollar_b x$b
x_backtick_b x`b
EOF

  desc='not_null_with_message'
  if (assertNotNull 'some message' 'x' >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi

  desc="double_ticks_are_null"
  if (assertNotNull '' >"${stdoutF}" 2>"${stderrF}"); then
    fail "${desc}: expected a failure"
    _showTestOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi
}

testAssertTrue() {
  # True values.
  while read -r desc value; do
    if (assertTrue "${value}" >"${stdoutF}" 2>"${stderrF}"); then
      th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    else
      fail "${desc}: unexpected failure"
      _showTestOutput
    fi
  done <<'EOF'
zero 0
zero_eq_zero [ 0 -eq 0 ]
EOF

  # Not true values.
  while read -r desc value; do
    if (assertTrue "${value}" >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    fi
  done <<EOF
one 1
zero_eq_1 [ 0 -eq 1 ]
null
EOF

  desc='true_with_message'
  if (assertTrue 'some message' 0 >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi
}

testAssertFalse() {
  # False values.
  while read -r desc value; do
    if (assertFalse "${value}" >"${stdoutF}" 2>"${stderrF}"); then
      th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    else
      fail "${desc}: unexpected failure"
      _showTestOutput
    fi
  done <<EOF
one 1
zero_eq_1 [ 0 -eq 1 ]
null
EOF

  # Not true values.
  while read -r desc value; do
    if (assertFalse "${value}" >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    fi
  done <<'EOF'
zero 0
zero_eq_zero [ 0 -eq 0 ]
EOF

  desc='false_with_message'
  if (assertFalse 'some message' 1 >"${stdoutF}" 2>"${stderrF}"); then
    th_assertTrueWithNoOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  else
    fail "${desc}: unexpected failure"
    _showTestOutput
  fi
}

FUNCTIONS='
assertEquals assertNotEquals
assertSame assertNotSame
assertContains assertNotContains
assertNull assertNotNull
assertTrue assertFalse
'

testTooFewArguments() {
  for fn in ${FUNCTIONS}; do
    # These functions support zero arguments.
    case "${fn}" in
      assertNull) continue ;;
      assertNotNull) continue ;;
    esac

    desc="${fn}"
    if (${fn} >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      got=$? want=${SHUNIT_ERROR}
      assertEquals "${desc}: incorrect return code" "${got}" "${want}"
      th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
    fi
  done
}

testTooManyArguments() {
  for fn in ${FUNCTIONS}; do
    desc="${fn}"
    if (${fn} arg1 arg2 arg3 arg4 >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      got=$? want=${SHUNIT_ERROR}
      assertEquals "${desc}: incorrect return code" "${got}" "${want}"
      th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
    fi
  done
}

oneTimeSetUp() {
  th_oneTimeSetUp
}

# showTestOutput for the most recently run test.
_showTestOutput() { th_showOutput "${SHUNIT_FALSE}" "${stdoutF}" "${stderrF}"; }

# Load and run shunit2.
# shellcheck disable=SC2034
@ -1,11 +1,10 @@
#! /bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shUnit2 unit test for failure functions. These functions do not test values.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2

@ -21,114 +20,60 @@ stderrF="${TMPDIR:-/tmp}/STDERR"

. ./shunit2_test_helpers

testFail() {
  # Test without a message.
  desc='fail_without_message'
  if ( fail >"${stdoutF}" 2>"${stderrF}" ); then
    fail "${desc}: expected a failure"
    th_showOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi

  # Test with a message.
  desc='fail_with_message'
  if ( fail 'some message' >"${stdoutF}" 2>"${stderrF}" ); then
    fail "${desc}: expected a failure"
    th_showOutput
  else
    th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
  fi
}

# FN_TESTS hold all the functions to be tested.
# shellcheck disable=SC2006
FN_TESTS=`
# fn            num_args pattern
cat <<EOF
fail          1
failNotEquals 3 but was:
failFound     2 found:
failNotFound  2 not found:
failSame      3 not same
failNotSame   3 but was:
EOF
`

testFailsWithArgs() {
  echo "${FN_TESTS}" |\
  while read -r fn num_args pattern; do
    case "${fn}" in
      fail) continue ;;
    esac

    # Test without a message.
    desc="${fn}_without_message"
    if ( ${fn} arg1 arg2 >"${stdoutF}" 2>"${stderrF}" ); then
      fail "${desc}: expected a failure"
      th_showOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
    fi

    # Test with a message.
    arg1='' arg2=''
    case ${num_args} in
      1) ;;
      2) arg1='arg1' ;;
      3) arg1='arg1' arg2='arg2' ;;
    esac

    desc="${fn}_with_message"
    if ( ${fn} 'some message' ${arg1} ${arg2} >"${stdoutF}" 2>"${stderrF}" ); then
      fail "${desc}: expected a failure"
      th_showOutput
    else
      th_assertFalseWithOutput "${desc}" $? "${stdoutF}" "${stderrF}"
      if ! grep -- "${pattern}" "${stdoutF}" >/dev/null; then
        fail "${desc}: incorrect message to STDOUT"
        th_showOutput
      fi
    fi
  done
}

testTooFewArguments() {
  echo "${FN_TESTS}" \
  |while read -r fn num_args pattern; do
    # Skip functions that support a single message argument.
    if [ "${num_args}" -eq 1 ]; then
      continue
    fi

    desc="${fn}"
    if (${fn} >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      got=$? want=${SHUNIT_ERROR}
      assertEquals "${desc}: incorrect return code" "${got}" "${want}"
      th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
    fi
  done
}

testTooManyArguments() {
  echo "${FN_TESTS}" \
  |while read -r fn num_args pattern; do
    desc="${fn}"
    if (${fn} arg1 arg2 arg3 arg4 >"${stdoutF}" 2>"${stderrF}"); then
      fail "${desc}: expected a failure"
      _showTestOutput
    else
      got=$? want=${SHUNIT_ERROR}
      assertEquals "${desc}: incorrect return code" "${got}" "${want}"
      th_assertFalseWithError "${desc}" "${got}" "${stdoutF}" "${stderrF}"
    fi
  done
}

oneTimeSetUp() {
  th_oneTimeSetUp
}

# Load and run shUnit2.
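The `FN_TESTS` table above drives every failure function through the same argument-count checks. A minimal, standalone sketch of that backtick-heredoc, table-driven loop (the commands and table rows here are illustrative, not part of shunit2):

```shell
#!/bin/sh
# Sketch of the table-driven pattern: a heredoc captured into a variable
# names each command and its argument count, and one loop exercises them all.
# shellcheck disable=SC2006
TABLE=`
cat <<EOF
true  0
false 0
EOF
`

echo "${TABLE}" | while read -r cmd num_args; do
  # Each row is split into fields by `read`; run the command and report.
  if ${cmd} >/dev/null 2>&1; then
    echo "${cmd}: exited zero"
  else
    echo "${cmd}: exited non-zero"
  fi
done
```

The pipeline runs the loop in a subshell, which is why the real tests only call assert functions inside it rather than setting variables for later use.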

View File

@ -1,99 +0,0 @@
#! /bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shUnit2 unit tests for general commands.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
# These variables will be overridden by the test helpers.
stdoutF="${TMPDIR:-/tmp}/STDOUT"
stderrF="${TMPDIR:-/tmp}/STDERR"
# Load test helpers.
. ./shunit2_test_helpers
testSkipping() {
# We shouldn't be skipping to start.
if isSkipping; then
th_error 'skipping *should not be* enabled'
return
fi
startSkipping
was_skipping_started=${SHUNIT_FALSE}
if isSkipping; then was_skipping_started=${SHUNIT_TRUE}; fi
endSkipping
was_skipping_ended=${SHUNIT_FALSE}
if isSkipping; then was_skipping_ended=${SHUNIT_TRUE}; fi
assertEquals "skipping wasn't started" "${was_skipping_started}" "${SHUNIT_TRUE}"
assertNotEquals "skipping wasn't ended" "${was_skipping_ended}" "${SHUNIT_TRUE}"
return 0
}
testStartSkippingWithMessage() {
unittestF="${SHUNIT_TMPDIR}/unittest"
sed 's/^#//' >"${unittestF}" <<\EOF
## Start skipping with a message.
#testSkipping() {
# startSkipping 'SKIP-a-Dee-Doo-Dah'
#}
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
if ! grep '\[skipping\] SKIP-a-Dee-Doo-Dah' "${stderrF}" >/dev/null; then
fail 'skipping message was not generated'
fi
return 0
}
testStartSkippingWithoutMessage() {
unittestF="${SHUNIT_TMPDIR}/unittest"
sed 's/^#//' >"${unittestF}" <<\EOF
## Start skipping with a message.
#testSkipping() {
# startSkipping
#}
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
# Ignoring errors with `|| :` as we only care about `FAILED` in the output.
( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
if grep '\[skipping\]' "${stderrF}" >/dev/null; then
fail 'skipping message was unexpectedly generated'
fi
return 0
}
setUp() {
for f in "${stdoutF}" "${stderrF}"; do
cp /dev/null "${f}"
done
# Reconfigure coloring as some tests override default behavior.
_shunit_configureColor "${SHUNIT_COLOR_DEFAULT}"
# shellcheck disable=SC2034,SC2153
SHUNIT_CMD_TPUT=${__SHUNIT_CMD_TPUT}
}
oneTimeSetUp() {
SHUNIT_COLOR_DEFAULT="${SHUNIT_COLOR}"
th_oneTimeSetUp
}
# Load and run shUnit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT=$0
. "${TH_SHUNIT}"

View File

@ -3,15 +3,17 @@
#
# shunit2 unit test for macros.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#

# Disable source following.
# shellcheck disable=SC1090,SC1091

# These variables will be overridden by the test helpers.
stdoutF="${TMPDIR:-/tmp}/STDOUT"
@ -21,223 +23,215 @@ stderrF="${TMPDIR:-/tmp}/STDERR"
. ./shunit2_test_helpers

testAssertEquals() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_EQUALS_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_EQUALS_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_EQUALS_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_EQUALS_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testAssertNotEquals() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_NOT_EQUALS_} 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NOT_EQUALS_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_NOT_EQUALS_} '"some msg"' 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NOT_EQUALS_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testSame() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_SAME_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_SAME_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_SAME_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_SAME_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testNotSame() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_NOT_SAME_} 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NOT_SAME_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_NOT_SAME_} '"some msg"' 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NOT_SAME_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testNull() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_NULL_} 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NULL_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_NULL_} '"some msg"' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NULL_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testNotNull() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_NOT_NULL_} '' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NOT_NULL_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_NOT_NULL_} '"some msg"' '""' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_NOT_NULL_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testAssertTrue() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_TRUE_} "${SHUNIT_FALSE}" >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_TRUE_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_TRUE_} '"some msg"' "${SHUNIT_FALSE}" >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_TRUE_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testAssertFalse() {
  isLinenoWorking || startSkipping

  ( ${_ASSERT_FALSE_} "${SHUNIT_TRUE}" >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_FALSE_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_ASSERT_FALSE_} '"some msg"' "${SHUNIT_TRUE}" >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_ASSERT_FALSE_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testFail() {
  isLinenoWorking || startSkipping

  ( ${_FAIL_} >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_FAIL_} '"some msg"' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testFailNotEquals() {
  isLinenoWorking || startSkipping

  ( ${_FAIL_NOT_EQUALS_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_NOT_EQUALS_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_FAIL_NOT_EQUALS_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_NOT_EQUALS_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testFailSame() {
  isLinenoWorking || startSkipping

  ( ${_FAIL_SAME_} 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_SAME_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_FAIL_SAME_} '"some msg"' 'x' 'x' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_SAME_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

testFailNotSame() {
  isLinenoWorking || startSkipping

  ( ${_FAIL_NOT_SAME_} 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_NOT_SAME_ failed to produce an ASSERT message'
    showTestOutput
  fi

  ( ${_FAIL_NOT_SAME_} '"some msg"' 'x' 'y' >"${stdoutF}" 2>"${stderrF}" )
  if ! wasAssertGenerated; then
    fail '_FAIL_NOT_SAME_ (with a message) failed to produce an ASSERT message'
    showTestOutput
  fi
}

oneTimeSetUp() {
  th_oneTimeSetUp

  if ! isLinenoWorking; then
    # shellcheck disable=SC2016
    th_warn '${LINENO} is not working for this shell. Tests will be skipped.'
  fi
}

# isLinenoWorking returns true if the `$LINENO` shell variable works properly.
isLinenoWorking() {
  # shellcheck disable=SC2006,SC2016
  ln=`eval echo "${LINENO:-}"`
  case ${ln} in
    [0-9]*) return "${SHUNIT_TRUE}" ;;
    -[0-9]*) return "${SHUNIT_FALSE}" ;;  # The dash shell produces negative values.
  esac
  return "${SHUNIT_FALSE}"
}

# showTestOutput for the most recently run test.
showTestOutput() { th_showOutput "${SHUNIT_FALSE}" "${stdoutF}" "${stderrF}"; }

# wasAssertGenerated returns true if an ASSERT was generated to STDOUT.
wasAssertGenerated() { grep '^ASSERT:\[[0-9]*\] *' "${stdoutF}" >/dev/null; }

# Disable output coloring as it breaks the tests.
SHUNIT_COLOR='none'; export SHUNIT_COLOR
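Every macro test above detects failure the same way: it greps the captured stdout for shunit2's `ASSERT:[lineno]` prefix, which the `_ASSERT_*_` macros only emit when `${LINENO}` is available. The grep pattern can be exercised on its own (the file name and sample message are illustrative):

```shell
#!/bin/sh
# The '^ASSERT:\[[0-9]*\] *' pattern matches shunit2 failure output that
# carries a line number, e.g. "ASSERT:[42] expected:<x> but was:<y>".
outF="${TMPDIR:-/tmp}/assert_demo.$$"
printf 'ASSERT:[42] expected:<x> but was:<y>\n' >"${outF}"

detected=no
grep '^ASSERT:\[[0-9]*\] *' "${outF}" >/dev/null && detected=yes
echo "assert detected: ${detected}"

rm -f "${outF}"
```

A plain `ASSERT:` line without the bracketed line number would not match, which is exactly how these tests distinguish the macro forms from the bare assert functions.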

View File

@ -3,17 +3,19 @@
#
# shUnit2 unit tests of miscellaneous things
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#

# Allow usage of legacy backticked `...` notation instead of $(...).
# $() are not fully portable (POSIX != portable).
# shellcheck disable=SC2006
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Not wanting to escape single quotes.
# shellcheck disable=SC1003

# These variables will be overridden by the test helpers.
stdoutF="${TMPDIR:-/tmp}/STDOUT"
@ -39,18 +41,14 @@ testUnboundVariable() {
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
  if ( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ); then
    fail 'expected a non-zero exit value'
  fi
  if ! grep '^ASSERT:unknown failure' "${stdoutF}" >/dev/null; then
    fail 'assert message was not generated'
  fi
  if ! grep '^Ran [0-9]* test' "${stdoutF}" >/dev/null; then
    fail 'test count message was not generated'
  fi
  if ! grep '^FAILED' "${stdoutF}" >/dev/null; then
    fail 'failure message was not generated'
  fi
}

# assertEquals repeats message argument.
@ -59,8 +57,7 @@ testIssue7() {
  # Disable coloring so 'ASSERT:' lines can be matched correctly.
  _shunit_configureColor 'none'

  # Ignoring errors with `|| :` as we only care about the message in this test.
  ( assertEquals 'Some message.' 1 2 >"${stdoutF}" 2>"${stderrF}" ) || :
  diff "${stdoutF}" - >/dev/null <<EOF
ASSERT:Some message. expected:<1> but was:<2>
EOF
@ -80,37 +77,19 @@ testIssue29() {
#SHUNIT_TEST_PREFIX='--- '
#. ${TH_SHUNIT}
EOF
  ( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" )
  grep '^--- test_assert' "${stdoutF}" >/dev/null
  rtrn=$?
  assertEquals "${SHUNIT_TRUE}" "${rtrn}"
  [ "${rtrn}" -eq "${SHUNIT_TRUE}" ] || cat "${stdoutF}" >&2
}
# Test that certain external commands sometimes "stubbed" by users are escaped.
testIssue54() {
for c in mkdir rm cat chmod sed; do
if grep "^[^#]*${c} " "${TH_SHUNIT}" | grep -qv "command ${c}"; then
fail "external call to ${c} not protected somewhere"
fi
done
# shellcheck disable=2016
if grep '^[^#]*[^ ] *\[' "${TH_SHUNIT}" | grep -qv '${__SHUNIT_BUILTIN} \['; then
fail 'call to [ not protected somewhere'
fi
# shellcheck disable=2016
if grep '^[^#]* *\.' "${TH_SHUNIT}" | grep -qv '${__SHUNIT_BUILTIN} \.'; then
fail 'call to . not protected somewhere'
fi
}
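`testIssue54` above greps the shunit2 source to verify that external utilities are invoked through the `command` builtin, so user-defined stubs cannot hijack them. A small sketch of the guard being enforced (the stub function and directory name are illustrative):

```shell
#!/bin/sh
# Sketch of the `command` guard checked by testIssue54: even when a user
# defines a function that shadows an external utility, `command` bypasses
# the function and runs the real binary.
mkdir() { echo 'stubbed mkdir'; }   # user stub shadowing the real mkdir

d="${TMPDIR:-/tmp}/guard_demo.$$"
command mkdir "${d}"                # runs the real mkdir, not the stub
[ -d "${d}" ] && echo 'real mkdir ran'
command rm -r "${d}"
```

Without the `command` prefix, `mkdir "${d}"` would call the stub and no directory would be created, which is precisely the breakage the test guards against.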
# shUnit2 should not exit with 0 when it has syntax errors.
# https://github.com/kward/shunit2/issues/69
testIssue69() {
  unittestF="${SHUNIT_TMPDIR}/unittest"

  # Note: assertNull not tested as zero arguments == null, which is valid.
  for t in Equals NotEquals NotNull Same NotSame True False; do
    assert="assert${t}"
    sed 's/^#//' >"${unittestF}" <<EOF
## Asserts with invalid argument counts should be counted as failures.

@ -118,8 +97,7 @@ testIssue69() {
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
    # Ignoring errors with `|| :` as we only care about `FAILED` in the output.
    ( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
    grep '^FAILED' "${stdoutF}" >/dev/null
    assertTrue "failure message for ${assert} was not generated" $?
  done
@ -136,8 +114,7 @@ testIssue77() {
#SHUNIT_COLOR='none'
#. ${TH_SHUNIT}
EOF
    # Ignoring errors with `|| :` as we only care about `FAILED` in the output.
    ( exec "${SHELL:-sh}" "${unittestF}" ) >"${stdoutF}" 2>"${stderrF}" || :
    grep '^FAILED' "${stdoutF}" >/dev/null
    assertTrue "failure of ${func}() did not end test" $?
  done
@ -158,24 +135,9 @@ testIssue84() {
#SHUNIT_TEST_PREFIX='--- '
#. ${TH_SHUNIT}
EOF
  # Ignoring errors with `|| :` as we only care about `FAILED` in the output.
  ( exec "${SHELL:-sh}" "${unittestF}" >"${stdoutF}" 2>"${stderrF}" ) || :
  if ! grep '^FAILED' "${stdoutF}" >/dev/null; then
fail 'failure message was not generated'
fi
}
# Demonstrate that asserts are no longer executed in subshells.
# https://github.com/kward/shunit2/issues/123
#
# NOTE: this test only works if the `${BASH_SUBSHELL}` variable is present.
testIssue123() {
if [ -z "${BASH_SUBSHELL:-}" ]; then
# shellcheck disable=SC2016
startSkipping 'The ${BASH_SUBSHELL} variable is unavailable in this shell.'
fi
# shellcheck disable=SC2016
assertTrue 'not in subshell' '[[ ${BASH_SUBSHELL} -eq 0 ]]'
}
testPrepForSourcing() {

@ -184,6 +146,55 @@ testPrepForSourcing() {
  assertEquals './abc' "`_shunit_prepForSourcing 'abc'`"
}
testEscapeCharInStr() {
while read -r desc char str want; do
got=`_shunit_escapeCharInStr "${char}" "${str}"`
assertEquals "${desc}" "${want}" "${got}"
done <<'EOF'
backslash \ '' ''
backslash_pre \ \def \\def
backslash_mid \ abc\def abc\\def
backslash_post \ abc\ abc\\
quote " '' ''
quote_pre " "def \"def
quote_mid " abc"def abc\"def
quote_post " abc" abc\"
string $ '' ''
string_pre $ $def \$def
string_mid $ abc$def abc\$def
string_post $ abc$ abc\$
EOF
# TODO(20170924:kward) fix or remove.
# actual=`_shunit_escapeCharInStr "'" ''`
# assertEquals '' "${actual}"
# assertEquals "abc\\'" `_shunit_escapeCharInStr "'" "abc'"`
# assertEquals "abc\\'def" `_shunit_escapeCharInStr "'" "abc'def"`
# assertEquals "\\'def" `_shunit_escapeCharInStr "'" "'def"`
# # Must put the backtick in a variable so the shell doesn't misinterpret it
# # while inside a backticked sequence (e.g. `echo '`'` would fail).
# backtick='`'
# actual=`_shunit_escapeCharInStr ${backtick} ''`
# assertEquals '' "${actual}"
# assertEquals '\`abc' \
# `_shunit_escapeCharInStr "${backtick}" ${backtick}'abc'`
# assertEquals 'abc\`' \
# `_shunit_escapeCharInStr "${backtick}" 'abc'${backtick}`
# assertEquals 'abc\`def' \
# `_shunit_escapeCharInStr "${backtick}" 'abc'${backtick}'def'`
}
testEscapeCharInStr_specialChars() {
# Make sure our forward slash doesn't upset sed.
assertEquals '/' "`_shunit_escapeCharInStr '\' '/'`"
# Some shells escape these differently.
# TODO(20170924:kward) fix or remove.
#assertEquals '\\a' `_shunit_escapeCharInStr '\' '\a'`
#assertEquals '\\b' `_shunit_escapeCharInStr '\' '\b'`
}
# Test the various ways of declaring functions.
#
# Prefixing (then stripping) with comment symbol so these functions aren't

@ -212,61 +223,23 @@ testExtractTestFunctions() {
#func_with_test_vars() {
#  testVariable=1234
#}
## Function with keyword but no parenthesis
#function test6 { echo '6'; }
## Function with keyword but no parenthesis, multi-line
#function test7 {
# echo '7';
#}
## Function with no parenthesis, '{' on next line
#function test8
#{
# echo '8'
#}
## Function with hyphenated name
#test-9() {
# echo '9';
#}
## Function without parenthesis or keyword
#test_foobar { echo 'hello world'; }
## Function with multiple function keywords
#function function test_test_test() { echo 'lorem'; }
EOF

  actual=`_shunit_extractTestFunctions "${f}"`
  assertEquals 'testABC test_def testG3 test4 test5 test6 test7 test8 test-9' "${actual}"
}

testColors() {
  while read -r cmd colors desc; do
    SHUNIT_CMD_TPUT=${cmd}
    want=${colors} got=`_shunit_colors`
    assertEquals "${desc}: incorrect number of colors;" \
        "${got}" "${want}"
  done <<'EOF'
missing_tput  16 missing tput command
mock_tput    256 mock tput command
EOF
}
testColorsWitoutTERM() {
SHUNIT_CMD_TPUT='mock_tput'
got=`TERM='' _shunit_colors`
want=16
assertEquals "${got}" "${want}"
}
mock_tput() {
if [ -z "${TERM}" ]; then
# shellcheck disable=SC2016
echo 'tput: No value for $TERM and no -T specified'
return 2
fi
if [ "$1" = 'colors' ]; then
echo 256
return 0
fi
return 1
} }
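`mock_tput` above works because `_shunit_colors` invokes tput through the `SHUNIT_CMD_TPUT` variable rather than calling the binary directly, letting tests swap in a mock. That command-in-a-variable seam can be sketched in isolation (all names below are illustrative, not shunit2's):

```shell
#!/bin/sh
# Sketch of the command-variable seam: production code calls ${CMD_TPUT}
# instead of tput directly, so a test can point the variable at a mock.
CMD_TPUT='tput'

colors() {
  # Fall back to 16 colors when the (possibly mocked) tput call fails.
  ${CMD_TPUT} colors 2>/dev/null || echo 16
}

fake_tput() { echo 256; }   # mock standing in for a 256-color terminal

CMD_TPUT='fake_tput'
colors                      # resolves through the mock
```

The same seam is what `setUp` resets with `SHUNIT_CMD_TPUT=${__SHUNIT_CMD_TPUT}`, so one test's mock cannot leak into the next.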
setUp() {

@ -276,9 +249,6 @@ setUp() {
  # Reconfigure coloring as some tests override default behavior.
  _shunit_configureColor "${SHUNIT_COLOR_DEFAULT}"

  # shellcheck disable=SC2034,SC2153
  SHUNIT_CMD_TPUT=${__SHUNIT_CMD_TPUT}
}

oneTimeSetUp() {


@@ -1,70 +0,0 @@
#! /bin/sh
# vim:et:ft=sh:sts=2:sw=2
#
# shUnit2 unit tests for `shopt` support.
#
# Copyright 2008-2021 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
#
# Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2
#
# Disable source following.
# shellcheck disable=SC1090,SC1091
# Load test helpers.
. ./shunit2_test_helpers
# Call shopt from a variable so it can be mocked if it doesn't work.
SHOPT_CMD='shopt'
testNullglob() {
isShoptWorking || startSkipping
nullglob=$(${SHOPT_CMD} nullglob |cut -f2)
# Test without nullglob.
${SHOPT_CMD} -u nullglob
assertEquals 'test without nullglob' 0 0
# Test with nullglob.
${SHOPT_CMD} -s nullglob
assertEquals 'test with nullglob' 1 1
# Reset nullglob.
if [ "${nullglob}" = "on" ]; then
${SHOPT_CMD} -s nullglob
else
${SHOPT_CMD} -u nullglob
fi
unset nullglob
}
oneTimeSetUp() {
th_oneTimeSetUp
if ! isShoptWorking; then
SHOPT_CMD='mock_shopt'
fi
}
# isShoptWorking returns true if the `shopt` shell command is available.
# NOTE: `shopt` is not defined as part of the POSIX standard.
isShoptWorking() {
# shellcheck disable=SC2039,SC3044
( shopt >/dev/null 2>&1 );
}
mock_shopt() {
if [ $# -eq 0 ]; then
echo "nullglob off"
fi
return
}
# Load and run shUnit2.
# shellcheck disable=SC2034
[ -n "${ZSH_VERSION:-}" ] && SHUNIT_PARENT="$0"
. "${TH_SHUNIT}"
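The shopt test above routes every `shopt` call through the `SHOPT_CMD` variable so that shells where `shopt` is missing (it is a bashism, not POSIX) can substitute `mock_shopt`. The pattern can be sketched standalone; the probe and mock below mirror the test, but the script itself is illustrative:

```shell
#!/bin/sh
# Call the command through a variable so a mock can be swapped in at runtime.
SHOPT_CMD='shopt'

# Minimal stand-in for shells (e.g. dash) where `shopt` is not a builtin.
mock_shopt() {
  if [ $# -eq 0 ]; then
    echo "nullglob off"
  fi
  return 0
}

# Probe in a subshell: a missing builtin fails without killing this script.
if ! ( shopt >/dev/null 2>&1 ); then
  SHOPT_CMD='mock_shopt'
fi

# Callers are agnostic about which implementation answers.
nullglob_state=$(${SHOPT_CMD} | grep nullglob | head -n 1)
echo "nullglob_state: ${nullglob_state}"
```

Either branch yields a `nullglob` line, so test code built on `${SHOPT_CMD}` runs unmodified on shells with or without the builtin.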


@@ -3,9 +3,8 @@
# #
# shUnit2 unit test for standalone operation. # shUnit2 unit test for standalone operation.
# #
# Copyright 2008-2021 Kate Ward. All Rights Reserved. # Copyright 2010-2017 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license. # Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
# #
# Author: kate.ward@forestent.com (Kate Ward) # Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2 # https://github.com/kward/shunit2
@@ -14,10 +13,13 @@
# the name of a unit test script, works. When run, this script determines if it # the name of a unit test script, works. When run, this script determines if it
# is running as a standalone program, and calls main() if it is. # is running as a standalone program, and calls main() if it is.
# #
### ShellCheck http://www.shellcheck.net/
# $() are not fully portable (POSIX != portable).
# shellcheck disable=SC2006
# Disable source following. # Disable source following.
# shellcheck disable=SC1090,SC1091 # shellcheck disable=SC1090,SC1091
ARGV0=$(basename "$0") ARGV0="`basename "$0"`"
# Load test helpers. # Load test helpers.
. ./shunit2_test_helpers . ./shunit2_test_helpers
@@ -30,7 +32,7 @@ main() {
${TH_SHUNIT} "${ARGV0}" ${TH_SHUNIT} "${ARGV0}"
} }
# Run main() if are running as a standalone script. # Are we running as a standalone?
if [ "${ARGV0}" = 'shunit2_standalone_test.sh' ]; then if [ "${ARGV0}" = 'shunit2_test_standalone.sh' ]; then
main "$@" if [ $# -gt 0 ]; then main "$@"; else main; fi
fi fi


@@ -2,27 +2,25 @@
# #
# shUnit2 unit test common functions # shUnit2 unit test common functions
# #
# Copyright 2008-2021 Kate Ward. All Rights Reserved. # Copyright 2008 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license. # Released under the Apache 2.0 license.
# http://www.apache.org/licenses/LICENSE-2.0
# #
# Author: kate.ward@forestent.com (Kate Ward) # Author: kate.ward@forestent.com (Kate Ward)
# https://github.com/kward/shunit2 # https://github.com/kward/shunit2
# #
### ShellCheck (http://www.shellcheck.net/) ### ShellCheck (http://www.shellcheck.net/)
# Commands are purposely escaped so they can be mocked outside shUnit2.
# shellcheck disable=SC1001,SC1012
# expr may be antiquated, but it is the only solution in some cases. # expr may be antiquated, but it is the only solution in some cases.
# shellcheck disable=SC2003 # shellcheck disable=SC2003
# $() are not fully portable (POSIX != portable). # $() are not fully portable (POSIX != portable).
# shellcheck disable=SC2006 # shellcheck disable=SC2006
# Exit immediately if a simple command exits with a non-zero status.
set -e
# Treat unset variables as an error when performing parameter expansion. # Treat unset variables as an error when performing parameter expansion.
set -u set -u
# Set shwordsplit for zsh. # Set shwordsplit for zsh.
[ -n "${ZSH_VERSION:-}" ] && setopt shwordsplit \[ -n "${ZSH_VERSION:-}" ] && setopt shwordsplit
# #
# Constants. # Constants.
@@ -35,11 +33,11 @@ TH_SHUNIT=${SHUNIT_INC:-./shunit2}; export TH_SHUNIT
# non-empty value to enable debug output, or TRACE to enable trace # non-empty value to enable debug output, or TRACE to enable trace
# output. # output.
TRACE=${TRACE:+'th_trace '} TRACE=${TRACE:+'th_trace '}
[ -n "${TRACE}" ] && DEBUG=1 \[ -n "${TRACE}" ] && DEBUG=1
[ -z "${TRACE}" ] && TRACE=':' \[ -z "${TRACE}" ] && TRACE=':'
DEBUG=${DEBUG:+'th_debug '} DEBUG=${DEBUG:+'th_debug '}
[ -z "${DEBUG}" ] && DEBUG=':' \[ -z "${DEBUG}" ] && DEBUG=':'
# #
# Variables. # Variables.
@@ -52,12 +50,12 @@ th_RANDOM=0
# #
# Logging functions. # Logging functions.
th_trace() { echo "test:TRACE $*" >&2; } th_trace() { echo "${MY_NAME}:TRACE $*" >&2; }
th_debug() { echo "test:DEBUG $*" >&2; } th_debug() { echo "${MY_NAME}:DEBUG $*" >&2; }
th_info() { echo "test:INFO $*" >&2; } th_info() { echo "${MY_NAME}:INFO $*" >&2; }
th_warn() { echo "test:WARN $*" >&2; } th_warn() { echo "${MY_NAME}:WARN $*" >&2; }
th_error() { echo "test:ERROR $*" >&2; } th_error() { echo "${MY_NAME}:ERROR $*" >&2; }
th_fatal() { echo "test:FATAL $*" >&2; } th_fatal() { echo "${MY_NAME}:FATAL $*" >&2; }
# Output subtest name. # Output subtest name.
th_subtest() { echo " $*" >&2; } th_subtest() { echo " $*" >&2; }
@@ -75,20 +73,20 @@ th_oneTimeSetUp() {
th_generateRandom() { th_generateRandom() {
tfgr_random=${th_RANDOM} tfgr_random=${th_RANDOM}
while [ "${tfgr_random}" = "${th_RANDOM}" ]; do while \[ "${tfgr_random}" = "${th_RANDOM}" ]; do
# shellcheck disable=SC2039 # shellcheck disable=SC2039
if [ -n "${RANDOM:-}" ]; then if \[ -n "${RANDOM:-}" ]; then
# $RANDOM works # $RANDOM works
# shellcheck disable=SC2039 # shellcheck disable=SC2039
tfgr_random=${RANDOM}${RANDOM}${RANDOM}$$ tfgr_random=${RANDOM}${RANDOM}${RANDOM}$$
elif [ -r '/dev/urandom' ]; then elif \[ -r '/dev/urandom' ]; then
tfgr_random=`od -vAn -N4 -tu4 </dev/urandom |sed 's/^[^0-9]*//'` tfgr_random=`od -vAn -N4 -tu4 </dev/urandom |sed 's/^[^0-9]*//'`
else else
tfgr_date=`date '+%H%M%S'` tfgr_date=`date '+%H%M%S'`
tfgr_random=`expr "${tfgr_date}" \* $$` tfgr_random=`expr "${tfgr_date}" \* $$`
unset tfgr_date unset tfgr_date
fi fi
[ "${tfgr_random}" = "${th_RANDOM}" ] && sleep 1 \[ "${tfgr_random}" = "${th_RANDOM}" ] && sleep 1
done done
th_RANDOM=${tfgr_random} th_RANDOM=${tfgr_random}
@@ -129,13 +127,12 @@ th_assertTrueWithNoOutput() {
th_stdout_=$3 th_stdout_=$3
th_stderr_=$4 th_stderr_=$4
assertEquals "${th_test_}: expected return value of true" "${SHUNIT_TRUE}" "${th_rtrn_}" assertTrue "${th_test_}; expected return value of zero" "${th_rtrn_}"
assertFalse "${th_test_}: expected no output to STDOUT" "[ -s '${th_stdout_}' ]" \[ "${th_rtrn_}" -ne "${SHUNIT_TRUE}" ] && \cat "${th_stderr_}"
assertFalse "${th_test_}: expected no output to STDERR" "[ -s '${th_stderr_}' ]" assertFalse "${th_test_}; expected no output to STDOUT" \
# shellcheck disable=SC2166 "[ -s '${th_stdout_}' ]"
if [ -s "${th_stdout_}" -o -s "${th_stderr_}" ]; then assertFalse "${th_test_}; expected no output to STDERR" \
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}" "[ -s '${th_stderr_}' ]"
fi
unset th_test_ th_rtrn_ th_stdout_ th_stderr_ unset th_test_ th_rtrn_ th_stdout_ th_stderr_
} }
@@ -155,13 +152,13 @@ th_assertFalseWithOutput()
th_stdout_=$3 th_stdout_=$3
th_stderr_=$4 th_stderr_=$4
assertNotEquals "${th_test_}: expected non-true return value" "${SHUNIT_TRUE}" "${th_rtrn_}" assertFalse "${th_test_}; expected non-zero return value" "${th_rtrn_}"
assertTrue "${th_test_}: expected output to STDOUT" "[ -s '${th_stdout_}' ]" assertTrue "${th_test_}; expected output to STDOUT" \
assertFalse "${th_test_}: expected no output to STDERR" "[ -s '${th_stderr_}' ]" "[ -s '${th_stdout_}' ]"
# shellcheck disable=SC2166 assertFalse "${th_test_}; expected no output to STDERR" \
if ! [ -s "${th_stdout_}" -a ! -s "${th_stderr_}" ]; then "[ -s '${th_stderr_}' ]"
\[ -s "${th_stdout_}" -a ! -s "${th_stderr_}" ] || \
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}" _th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
fi
unset th_test_ th_rtrn_ th_stdout_ th_stderr_ unset th_test_ th_rtrn_ th_stdout_ th_stderr_
} }
@@ -180,13 +177,13 @@ th_assertFalseWithError() {
th_stdout_=$3 th_stdout_=$3
th_stderr_=$4 th_stderr_=$4
assertFalse "${th_test_}: expected non-zero return value" "${th_rtrn_}" assertFalse "${th_test_}; expected non-zero return value" "${th_rtrn_}"
assertFalse "${th_test_}: expected no output to STDOUT" "[ -s '${th_stdout_}' ]" assertFalse "${th_test_}; expected no output to STDOUT" \
assertTrue "${th_test_}: expected output to STDERR" "[ -s '${th_stderr_}' ]" "[ -s '${th_stdout_}' ]"
# shellcheck disable=SC2166 assertTrue "${th_test_}; expected output to STDERR" \
if ! [ ! -s "${th_stdout_}" -a -s "${th_stderr_}" ]; then "[ -s '${th_stderr_}' ]"
\[ ! -s "${th_stdout_}" -a -s "${th_stderr_}" ] || \
_th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}" _th_showOutput "${SHUNIT_FALSE}" "${th_stdout_}" "${th_stderr_}"
fi
unset th_test_ th_rtrn_ th_stdout_ th_stderr_ unset th_test_ th_rtrn_ th_stdout_ th_stderr_
} }
@@ -196,8 +193,8 @@ th_assertFalseWithError() {
# they are either written to disk, or recognized as an error if the file is empty. # they are either written to disk, or recognized as an error if the file is empty.
th_clearReturn() { cp /dev/null "${returnF}"; } th_clearReturn() { cp /dev/null "${returnF}"; }
th_queryReturn() { th_queryReturn() {
if [ -s "${returnF}" ]; then if \[ -s "${returnF}" ]; then
th_return=`cat "${returnF}"` th_return=`\cat "${returnF}"`
else else
th_return=${SHUNIT_ERROR} th_return=${SHUNIT_ERROR}
fi fi
@@ -207,26 +204,22 @@ th_queryReturn() {
# Providing external and internal calls to the showOutput helper function. # Providing external and internal calls to the showOutput helper function.
th_showOutput() { _th_showOutput "$@"; } th_showOutput() { _th_showOutput "$@"; }
_th_showOutput() { _th_showOutput() {
if isSkipping; then _th_return_=$1
return _th_stdout_=$2
fi _th_stderr_=$3
_th_return_="${1:-${returnF}}" isSkipping
_th_stdout_="${2:-${stdoutF}}" if \[ $? -eq "${SHUNIT_FALSE}" -a "${_th_return_}" != "${SHUNIT_TRUE}" ]; then
_th_stderr_="${3:-${stderrF}}" if \[ -n "${_th_stdout_}" -a -s "${_th_stdout_}" ]; then
if [ "${_th_return_}" != "${SHUNIT_TRUE}" ]; then
# shellcheck disable=SC2166
if [ -n "${_th_stdout_}" -a -s "${_th_stdout_}" ]; then
echo '>>> STDOUT' >&2 echo '>>> STDOUT' >&2
cat "${_th_stdout_}" >&2 \cat "${_th_stdout_}" >&2
echo '<<< STDOUT' >&2
fi fi
# shellcheck disable=SC2166 if \[ -n "${_th_stderr_}" -a -s "${_th_stderr_}" ]; then
if [ -n "${_th_stderr_}" -a -s "${_th_stderr_}" ]; then
echo '>>> STDERR' >&2 echo '>>> STDERR' >&2
cat "${_th_stderr_}" >&2 \cat "${_th_stderr_}" >&2
echo '<<< STDERR' >&2 fi
if \[ -n "${_th_stdout_}" -o -n "${_th_stderr_}" ]; then
echo '<<< end output' >&2
fi fi
fi fi


@@ -3,7 +3,7 @@
# #
# Unit test suite runner. # Unit test suite runner.
# #
# Copyright 2008-2020 Kate Ward. All Rights Reserved. # Copyright 2008-2017 Kate Ward. All Rights Reserved.
# Released under the Apache 2.0 license. # Released under the Apache 2.0 license.
# #
# Author: kate.ward@forestent.com (Kate Ward) # Author: kate.ward@forestent.com (Kate Ward)
@@ -12,20 +12,6 @@
# This script runs all the unit tests that can be found, and generates a nice # This script runs all the unit tests that can be found, and generates a nice
# report of the tests. # report of the tests.
# #
### Sample usage:
#
# Run all tests for all shells.
# $ ./test_runner
#
# Run all tests for single shell.
# $ ./test_runner -s /bin/bash
#
# Run single test for all shells.
# $ ./test_runner -t shunit_asserts_test.sh
#
# Run single test for single shell.
# $ ./test_runner -s /bin/bash -t shunit_asserts_test.sh
#
### ShellCheck (http://www.shellcheck.net/) ### ShellCheck (http://www.shellcheck.net/)
# Disable source following. # Disable source following.
# shellcheck disable=SC1090,SC1091 # shellcheck disable=SC1090,SC1091
@@ -39,10 +25,8 @@
RUNNER_LOADED=0 RUNNER_LOADED=0
RUNNER_ARGV0=`basename "$0"` RUNNER_ARGV0=`basename "$0"`
RUNNER_SHELLS='/bin/sh ash /bin/bash /bin/dash /bin/ksh /bin/mksh /bin/zsh' RUNNER_SHELLS='/bin/sh ash /bin/bash /bin/dash /bin/ksh /bin/pdksh /bin/zsh'
RUNNER_TEST_SUFFIX='_test.sh' RUNNER_TEST_SUFFIX='_test.sh'
true; RUNNER_TRUE=$?
false; RUNNER_FALSE=$?
runner_warn() { echo "runner:WARN $*" >&2; } runner_warn() { echo "runner:WARN $*" >&2; }
runner_error() { echo "runner:ERROR $*" >&2; } runner_error() { echo "runner:ERROR $*" >&2; }
@@ -52,7 +36,7 @@ runner_usage() {
echo "usage: ${RUNNER_ARGV0} [-e key=val ...] [-s shell(s)] [-t test(s)]" echo "usage: ${RUNNER_ARGV0} [-e key=val ...] [-s shell(s)] [-t test(s)]"
} }
_runner_tests() { echo ./*${RUNNER_TEST_SUFFIX} |sed 's#\./##g'; } _runner_tests() { echo ./*${RUNNER_TEST_SUFFIX} |sed 's#./##g'; }
_runner_testName() { _runner_testName() {
# shellcheck disable=SC1117 # shellcheck disable=SC1117
_runner_testName_=`expr "${1:-}" : "\(.*\)${RUNNER_TEST_SUFFIX}"` _runner_testName_=`expr "${1:-}" : "\(.*\)${RUNNER_TEST_SUFFIX}"`
@@ -130,7 +114,6 @@ for key in ${env}; do
done done
# Run tests. # Run tests.
runner_passing_=${RUNNER_TRUE}
for shell in ${shells}; do for shell in ${shells}; do
echo echo
@@ -144,20 +127,20 @@ EOF
# Check for existence of shell. # Check for existence of shell.
shell_bin=${shell} shell_bin=${shell}
shell_name='' shell_name=''
shell_present=${RUNNER_FALSE} shell_present=${FALSE}
case ${shell} in case ${shell} in
ash) ash)
shell_bin=`command -v busybox` shell_bin=`which busybox |grep -v '^no busybox'`
[ $? -eq "${RUNNER_TRUE}" ] && shell_present="${RUNNER_TRUE}" [ $? -eq "${TRUE}" -a -n "${shell_bin}" ] && shell_present="${TRUE}"
shell_bin="${shell_bin:+${shell_bin} }ash" shell_bin="${shell_bin} ash"
shell_name=${shell} shell_name=${shell}
;; ;;
*) *)
[ -x "${shell_bin}" ] && shell_present="${RUNNER_TRUE}" [ -x "${shell_bin}" ] && shell_present="${TRUE}"
shell_name=`basename "${shell}"` shell_name=`basename "${shell}"`
;; ;;
esac esac
if [ "${shell_present}" -eq "${RUNNER_FALSE}" ]; then if [ "${shell_present}" -eq "${FALSE}" ]; then
runner_warn "unable to run tests with the ${shell_name} shell" runner_warn "unable to run tests with the ${shell_name} shell"
continue continue
fi fi
@@ -174,18 +157,9 @@ EOF
# ${shell_bin} needs word splitting. # ${shell_bin} needs word splitting.
# shellcheck disable=SC2086 # shellcheck disable=SC2086
( exec ${shell_bin} "./${t}" 2>&1; ) ( exec ${shell_bin} "./${t}" 2>&1; )
shell_passing=$?
if [ "${shell_passing}" -ne "${RUNNER_TRUE}" ]; then
runner_warn "${shell_bin} not passing"
fi
test "${runner_passing_}" -eq ${RUNNER_TRUE} -a ${shell_passing} -eq ${RUNNER_TRUE}
runner_passing_=$?
done done
done done
return ${runner_passing_}
} }
# Execute main() if this is run in standalone mode (i.e. not from a unit test). # Execute main() if this is run in standalone mode (i.e. not from a unit test).
if [ -z "${SHUNIT_VERSION}" ]; then [ -z "${SHUNIT_VERSION}" ] && main "$@"
main "$@"
fi
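The runner's per-shell loop above boils down to: resolve each candidate shell to a binary, warn and skip when it is absent, and otherwise execute every test file under it. A stripped-down sketch of that probe (the shell list here is illustrative, not the runner's full `RUNNER_SHELLS` handling):

```shell
#!/bin/sh
# Probe each candidate shell and only schedule tests for the ones present.
shells='/bin/sh /bin/bash /bin/dash /bin/zsh'
available=''

for shell in ${shells}; do
  if [ -x "${shell}" ]; then
    available="${available} ${shell}"
  else
    # Mirrors runner_warn: a missing shell is skipped, not fatal.
    echo "runner:WARN unable to run tests with the ${shell} shell" >&2
  fi
done

echo "would run tests with:${available}"
```

The real runner additionally special-cases `ash` (resolved through busybox) and tracks a cumulative pass/fail status across shells.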

View File

@@ -1,20 +1,19 @@
#!/usr/bin/env bash #!/usr/bin/env bash
###### osync - Rsync based two way sync engine with fault tolerance ###### osync - Rsync based two way sync engine with fault tolerance
###### (C) 2013-2020 by Orsiris de Jong (www.netpower.fr) ###### (C) 2013-2017 by Orsiris de Jong (www.netpower.fr)
###### osync-target-helper v1.2.2+ config file rev 2017061901
[GENERAL] ## ---------- GENERAL OPTIONS
CONFIG_FILE_REVISION=1.3.0
## Sync job identification ## Sync job identification
INSTANCE_ID="target_test" INSTANCE_ID="sync_test"
## Directories to synchronize. ## Directories to synchronize.
## Initiator is the system osync runs on. The initiator directory must be a local path. ## Initiator is the system the main osync instance runs on. The initiator directory must be a remote path for the osync target helper to contact.
INITIATOR_SYNC_DIR="/home/git/osync/dir1" INITIATOR_SYNC_DIR="ssh://backupuser@yourhost.old:22//home/git/osync/dir1"
#INITIATOR_SYNC_DIR="ssh://backupuser@yourhost.old:22//home/git/osync/dir1"
## Target is the system osync synchronizes to (can be the same system as the initiator in case of local sync tasks). The target directory can be a local or remote path. ## Target is the system osync synchronizes to. The target directory must be a local path.
TARGET_SYNC_DIR="/home/git/osync/dir2" TARGET_SYNC_DIR="/home/git/osync/dir2"
## If the target system is remote, you can specify a RSA key (please use full path). If not defined, the default ~/.ssh/id_rsa will be used. See documentation for further information. ## If the target system is remote, you can specify a RSA key (please use full path). If not defined, the default ~/.ssh/id_rsa will be used. See documentation for further information.
@@ -30,27 +29,24 @@ _REMOTE_TOKEN=SomeAlphaNumericToken9
LOGFILE="" LOGFILE=""
## If enabled, synchronization on remote system will be processed as superuser. See documentation for /etc/sudoers file configuration. ## If enabled, synchronization on remote system will be processed as superuser. See documentation for /etc/sudoers file configuration.
SUDO_EXEC=false SUDO_EXEC=no
## ---------- REMOTE SYNC OPTIONS ## ---------- REMOTE SYNC OPTIONS
## ssh compression should be used unless your remote connection is good enough (LAN) ## ssh compression should be used unless your remote connection is good enough (LAN)
SSH_COMPRESSION=true SSH_COMPRESSION=yes
## Ignore ssh known hosts. DANGER WILL ROBINSON DANGER ! This can lead to security issues. Only enable this if you know what you're doing. ## Ignore ssh known hosts. DANGER WILL ROBINSON DANGER ! This can lead to security issues. Only enable this if you know what you're doing.
SSH_IGNORE_KNOWN_HOSTS=false SSH_IGNORE_KNOWN_HOSTS=no
## Check for connectivity to remote host before launching remote sync task. Be sure the hosts responds to ping. Failing to ping will stop sync. ## Check for connectivity to remote host before launching remote sync task. Be sure the hosts responds to ping. Failing to ping will stop sync.
REMOTE_HOST_PING=false REMOTE_HOST_PING=no
## Check for internet access by pinging one or more 3rd party hosts before remote sync task. Leave empty if you don't want this check to be performed. Failing to ping will stop sync. ## Check for internet access by pinging one or more 3rd party hosts before remote sync task. Leave empty if you don't want this check to be performed. Failing to ping will stop sync.
## If you use this function, you should set more than one 3rd party host, and be sure you can ping them. ## If you use this function, you should set more than one 3rd party host, and be sure you can ping them.
## Be aware some DNS like opendns redirect false hostnames. Also, this adds an extra execution time of a bit less than a minute. ## Be aware some DNS like opendns redirect false hostnames. Also, this adds an extra execution time of a bit less than a minute.
REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com" REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com"
## Log a message every KEEP_LOGGING seconds just to know the task is still alive
KEEP_LOGGING=1801
## Minimum time (in seconds) in file monitor /daemon mode between modification detection and sync task in order to let copy operations finish. ## Minimum time (in seconds) in file monitor /daemon mode between modification detection and sync task in order to let copy operations finish.
MIN_WAIT=60 MIN_WAIT=60
@@ -58,7 +54,7 @@ MIN_WAIT=60
## Use 0 to wait indefinitely. ## Use 0 to wait indefinitely.
MAX_WAIT=7200 MAX_WAIT=7200
[ALERT_OPTIONS] ## ---------- ALERT OPTIONS
## List of alert mails separated by spaces ## List of alert mails separated by spaces
## Most Unix systems (including Win10 bash) have mail support out of the box ## Most Unix systems (including Win10 bash) have mail support out of the box
@@ -81,22 +77,3 @@ SMTP_PORT=25
SMTP_ENCRYPTION=none SMTP_ENCRYPTION=none
SMTP_USER= SMTP_USER=
SMTP_PASSWORD= SMTP_PASSWORD=
[EXECUTION_HOOKS]
## Commands will be run before and / or after the sync process (remote execution will only happen if REMOTE_OPERATION is set).
LOCAL_RUN_BEFORE_CMD=""
LOCAL_RUN_AFTER_CMD=""
REMOTE_RUN_BEFORE_CMD=""
REMOTE_RUN_AFTER_CMD=""
## Max execution time of commands before they get force killed. Leave 0 if you don't want this to happen. Time is specified in seconds.
MAX_EXEC_TIME_PER_CMD_BEFORE=0
MAX_EXEC_TIME_PER_CMD_AFTER=0
## Stops osync execution if one of the above commands fail
STOP_ON_CMD_ERROR=true
## Run local and remote after sync commands even on failure
RUN_AFTER_CMD_ON_ERROR=false

File diff suppressed because it is too large


@@ -1,9 +1,9 @@
#!/usr/bin/env bash #!/usr/bin/env bash
SUBPROGRAM=osync SUBPROGRAM=osync
PROGRAM="$SUBPROGRAM-batch" # Batch program to run osync / obackup instances sequentially and rerun failed ones PROGRAM="$SUBPROGRAM-batch" # Batch program to run osync / obackup instances sequentially and rerun failed ones
AUTHOR="(L) 2013-2020 by Orsiris de Jong" AUTHOR="(L) 2013-2017 by Orsiris de Jong"
CONTACT="http://www.netpower.fr - ozy@netpower.fr" CONTACT="http://www.netpower.fr - ozy@netpower.fr"
PROGRAM_BUILD=2020031502 PROGRAM_BUILD=2016120401
## Runs an osync /obackup instance for every conf file found ## Runs an osync /obackup instance for every conf file found
## If an instance fails, run it again if time permits ## If an instance fails, run it again if time permits
@@ -26,217 +26,36 @@ else
LOG_FILE=./$SUBPROGRAM-batch.log LOG_FILE=./$SUBPROGRAM-batch.log
fi fi
## Default directory where to store temporary run files
if [ -w /tmp ]; then
RUN_DIR=/tmp
elif [ -w /var/tmp ]; then
RUN_DIR=/var/tmp
else
RUN_DIR=.
fi
# No need to edit under this line ############################################################## # No need to edit under this line ##############################################################
#### RemoteLogger SUBSET #### function _logger {
local value="${1}" # What to log
# Array to string converter, see http://stackoverflow.com/questions/1527049/bash-join-elements-of-an-array echo -e "$value" >> "$LOG_FILE"
# usage: joinString separatorChar Array
function joinString {
local IFS="$1"; shift; echo "$*";
} }
# Sub function of Logger
function _Logger {
local logValue="${1}" # Log to file
local stdValue="${2}" # Log to screeen
local toStdErr="${3:-false}" # Log to stderr instead of stdout
if [ "$logValue" != "" ]; then
echo -e "$logValue" >> "$LOG_FILE"
# Build current log file for alerts if we have a sufficient environment
if [ "$_LOGGER_WRITE_PARTIAL_LOGS" == true ] && [ "$RUN_DIR/$PROGRAM" != "/" ]; then
echo -e "$logValue" >> "$RUN_DIR/$PROGRAM._Logger.$SCRIPT_PID.$TSTAMP"
fi
fi
if [ "$stdValue" != "" ] && [ "$_LOGGER_SILENT" != true ]; then
if [ $toStdErr == true ]; then
# Force stderr color in subshell
(>&2 echo -e "$stdValue")
else
echo -e "$stdValue"
fi
fi
}
# Remote logger similar to below Logger, without log to file and alert flags
function RemoteLogger {
local value="${1}" # Sentence to log (in double quotes)
local level="${2}" # Log level
local retval="${3:-undef}" # optional return value of command
local prefix
if [ "$_LOGGER_PREFIX" == "time" ]; then
prefix="RTIME: $SECONDS - "
elif [ "$_LOGGER_PREFIX" == "date" ]; then
prefix="R $(date) - "
else
prefix=""
fi
if [ "$level" == "CRITICAL" ]; then
_Logger "" "$prefix\e[1;33;41m$value\e[0m" true
if [ "$_DEBUG" == true ]; then
_Logger "" "[$retval] in [$(joinString , ${FUNCNAME[@]})] SP=$SCRIPT_PID P=$$" true
fi
return
elif [ "$level" == "ERROR" ]; then
_Logger "" "$prefix\e[31m$value\e[0m" true
if [ "$_DEBUG" == true ]; then
_Logger "" "[$retval] in [$(joinString , ${FUNCNAME[@]})] SP=$SCRIPT_PID P=$$" true
fi
return
elif [ "$level" == "WARN" ]; then
_Logger "" "$prefix\e[33m$value\e[0m" true
if [ "$_DEBUG" == true ]; then
_Logger "" "[$retval] in [$(joinString , ${FUNCNAME[@]})] SP=$SCRIPT_PID P=$$" true
fi
return
elif [ "$level" == "NOTICE" ]; then
if [ "$_LOGGER_ERR_ONLY" != true ]; then
_Logger "" "$prefix$value"
fi
return
elif [ "$level" == "VERBOSE" ]; then
if [ "$_LOGGER_VERBOSE" == true ]; then
_Logger "" "$prefix$value"
fi
return
elif [ "$level" == "ALWAYS" ]; then
_Logger "" "$prefix$value"
return
elif [ "$level" == "DEBUG" ]; then
if [ "$_DEBUG" == true ]; then
_Logger "" "$prefix$value"
return
fi
else
_Logger "" "\e[41mLogger function called without proper loglevel [$level].\e[0m" true
_Logger "" "Value was: $prefix$value" true
fi
}
#### RemoteLogger SUBSET END ####
# General log function with log levels:
# Environment variables
# _LOGGER_SILENT: Disables any output to stdout & stderr
# _LOGGER_ERR_ONLY: Disables any output to stdout except for ALWAYS loglevel
# _LOGGER_VERBOSE: Allows VERBOSE loglevel messages to be sent to stdout
# Loglevels
# Except for VERBOSE, all loglevels are ALWAYS sent to log file
# CRITICAL, ERROR, WARN sent to stderr, color depending on level, level also logged
# NOTICE sent to stdout
# VERBOSE sent to stdout if _LOGGER_VERBOSE=true
# ALWAYS is sent to stdout unless _LOGGER_SILENT=true
# DEBUG & PARANOIA_DEBUG are only sent to stdout if _DEBUG=true
function Logger { function Logger {
local value="${1}" # Sentence to log (in double quotes) local value="${1}" # What to log
local level="${2}" # Log level local level="${2}" # Log level: DEBUG, NOTICE, WARN, ERROR, CRITIAL
local retval="${3:-undef}" # optional return value of command
local prefix prefix="$(date) - "
if [ "$_LOGGER_PREFIX" == "time" ]; then
prefix="TIME: $SECONDS - "
elif [ "$_LOGGER_PREFIX" == "date" ]; then
prefix="$(date '+%Y-%m-%d %H:%M:%S') - "
else
prefix=""
fi
## Obfuscate _REMOTE_TOKEN in logs (for ssh_filter usage only in osync and obackup)
value="${value/env _REMOTE_TOKEN=$_REMOTE_TOKEN/env _REMOTE_TOKEN=__o_O__}"
value="${value/env _REMOTE_TOKEN=\$_REMOTE_TOKEN/env _REMOTE_TOKEN=__o_O__}"
if [ "$level" == "CRITICAL" ]; then if [ "$level" == "CRITICAL" ]; then
_Logger "$prefix($level):$value" "$prefix\e[1;33;41m$value\e[0m" true _logger "$prefix\e[41m$value\e[0m"
ERROR_ALERT=true
# ERROR_ALERT / WARN_ALERT is not set in main when Logger is called from a subprocess. We need to create these flag files for ERROR_ALERT / WARN_ALERT to be picked up by Alert
echo -e "[$retval] in [$(joinString , ${FUNCNAME[@]})] SP=$SCRIPT_PID P=$$\n$prefix($level):$value" >> "$RUN_DIR/$PROGRAM.ERROR_ALERT.$SCRIPT_PID.$TSTAMP"
return
elif [ "$level" == "ERROR" ]; then elif [ "$level" == "ERROR" ]; then
_Logger "$prefix($level):$value" "$prefix\e[91m$value\e[0m" true _logger "$prefix\e[91m$value\e[0m"
ERROR_ALERT=true
echo -e "[$retval] in [$(joinString , ${FUNCNAME[@]})] SP=$SCRIPT_PID P=$$\n$prefix($level):$value" >> "$RUN_DIR/$PROGRAM.ERROR_ALERT.$SCRIPT_PID.$TSTAMP"
return
elif [ "$level" == "WARN" ]; then elif [ "$level" == "WARN" ]; then
_Logger "$prefix($level):$value" "$prefix\e[33m$value\e[0m" true _logger "$prefix\e[93m$value\e[0m"
WARN_ALERT=true
echo -e "[$retval] in [$(joinString , ${FUNCNAME[@]})] SP=$SCRIPT_PID P=$$\n$prefix($level):$value" >> "$RUN_DIR/$PROGRAM.WARN_ALERT.$SCRIPT_PID.$TSTAMP"
return
elif [ "$level" == "NOTICE" ]; then elif [ "$level" == "NOTICE" ]; then
if [ "$_LOGGER_ERR_ONLY" != true ]; then _logger "$prefix$value"
_Logger "$prefix$value" "$prefix$value"
fi
return
elif [ "$level" == "VERBOSE" ]; then
if [ "$_LOGGER_VERBOSE" == true ]; then
_Logger "$prefix($level):$value" "$prefix$value"
fi
return
elif [ "$level" == "ALWAYS" ]; then
_Logger "$prefix$value" "$prefix$value"
return
elif [ "$level" == "DEBUG" ]; then elif [ "$level" == "DEBUG" ]; then
if [ "$_DEBUG" == true ]; then if [ "$DEBUG" == "yes" ]; then
_Logger "$prefix$value" "$prefix$value" _logger "$prefix$value"
return
fi fi
else else
_Logger "\e[41mLogger function called without proper loglevel [$level].\e[0m" "\e[41mLogger function called without proper loglevel [$level].\e[0m" true _logger "\e[41mLogger function called without proper loglevel.\e[0m"
_Logger "Value was: $prefix$value" "Value was: $prefix$value" true _logger "$prefix$value"
fi fi
} }
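The loglevel contract documented above (everything but VERBOSE goes to the log file; CRITICAL/ERROR/WARN to stderr; NOTICE to stdout; DEBUG gated by `_DEBUG`) can be condensed into a small dispatch. This is an illustrative sketch of that contract, not the script's full implementation (no color codes, alert flags, or token obfuscation):

```shell
#!/usr/bin/env bash
# Condensed level dispatch: log file always, console stream chosen per level.
LOG_FILE=$(mktemp)
_DEBUG=${_DEBUG:-false}

Logger() {
  local value="$1" level="$2"
  case "${level}" in
    CRITICAL|ERROR|WARN)
      echo "(${level}):${value}" >> "${LOG_FILE}"
      (>&2 echo "${level}: ${value}")   # problems always reach stderr
      ;;
    NOTICE)
      echo "${value}" >> "${LOG_FILE}"
      echo "${value}"
      ;;
    DEBUG)
      if [ "${_DEBUG}" = true ]; then
        echo "${value}" >> "${LOG_FILE}"
        echo "${value}"
      fi
      ;;
    *)
      (>&2 echo "Logger function called without proper loglevel [${level}].")
      ;;
  esac
}

Logger "sync started" "NOTICE"
Logger "rsync returned non-zero" "ERROR"
Logger "verbose detail, hidden unless _DEBUG=true" "DEBUG"
```

Keeping the level tag in the file copy but not the NOTICE console copy matches the split the real `_Logger` makes between its log and screen arguments.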
function CleanUp {
# Exit controlmaster before the socket gets deleted
if [ "$SSH_CONTROLMASTER" == true ] && [ "$SSH_CMD" != "" ]; then
$SSH_CMD -O exit
fi
if [ "$_DEBUG" != true ]; then
# Removing optional remote $RUN_DIR that goes into local $RUN_DIR
if [ -d "$RUN_DIR/$PROGRAM.remote.$SCRIPT_PID.$TSTAMP" ]; then
rm -rf "$RUN_DIR/$PROGRAM.remote.$SCRIPT_PID.$TSTAMP"
fi
# Removing all temporary run files
rm -f "$RUN_DIR/$PROGRAM."*".$SCRIPT_PID.$TSTAMP"
# Fix for sed -i requiring backup extension for BSD & Mac (see all sed -i statements)
rm -f "$RUN_DIR/$PROGRAM."*".$SCRIPT_PID.$TSTAMP.tmp"
fi
}
function GenericTrapQuit {
local exitcode=0
# Get ERROR / WARN alert flags from subprocesses that call Logger
if [ -f "$RUN_DIR/$PROGRAM.WARN_ALERT.$SCRIPT_PID.$TSTAMP" ]; then
WARN_ALERT=true
exitcode=2
fi
if [ -f "$RUN_DIR/$PROGRAM.ERROR_ALERT.$SCRIPT_PID.$TSTAMP" ]; then
ERROR_ALERT=true
exitcode=1
fi
CleanUp
exit $exitcode
}
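As the comments above note, `Logger` may run in a subprocess, where setting `ERROR_ALERT`/`WARN_ALERT` as variables never reaches the parent; the script therefore drops flag files in `RUN_DIR` that `GenericTrapQuit` turns into an exit code. The mechanism in isolation (paths and names simplified for illustration):

```shell
#!/usr/bin/env bash
# Flag-file pattern: a subshell cannot modify the parent's variables, so it
# leaves a file behind; the parent's exit handler turns flags into a code.
RUN_DIR=$(mktemp -d)

log_error() {
  # When run in a subshell, this assignment never reaches the parent...
  ERROR_ALERT=true
  # ...but a file on disk survives the subshell boundary.
  touch "${RUN_DIR}/ERROR_ALERT"
}

trap_quit() {
  local exitcode=0
  [ -f "${RUN_DIR}/WARN_ALERT" ] && exitcode=2
  [ -f "${RUN_DIR}/ERROR_ALERT" ] && exitcode=1
  rm -rf "${RUN_DIR}"
  echo "exitcode=${exitcode}"
}

( log_error )               # e.g. a piped or backgrounded helper
result=$(trap_quit)
echo "${result}"
```

The real script keys the flag filenames on `$SCRIPT_PID.$TSTAMP` so concurrent instances never read each other's flags.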
function CheckEnvironment { function CheckEnvironment {
## osync / obackup executable full path can be set here if it cannot be found on the system ## osync / obackup executable full path can be set here if it cannot be found on the system
@@ -326,8 +145,6 @@ function Usage {
exit 128 exit 128
} }
trap GenericTrapQuit TERM EXIT HUP QUIT
opts="" opts=""
for i in "$@" for i in "$@"
do do


@@ -1,139 +0,0 @@
#!/usr/bin/env bash
#
# osync-srv Two way directory sync daemon
#
# chkconfig: - 90 99
# description: monitors a local directory and syncs to a local or remote \
# directory on file changes
# processname: /usr/local/bin/osync.sh
# config: /etc/osync/*.conf
# pidfile: /var/run/osync
### BEGIN INIT INFO
# Provides: osync-target-helper-srv
# Required-Start: $local_fs $time
# Required-Stop: $local_fs $time
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: osync-target-helper daemon
# Description: Two way directory sync daemon
### END INIT INFO
prog=osync
progexec=osync.sh
progpath=/usr/local/bin
confdir=/etc/osync
pidfile=/var/run/$prog-target-helper
SCRIPT_BUILD=2018100101
if [ ! -f $progpath/$progexec ] && [ ! -f $progexec ]; then
echo "Cannot find $prog executable in $progpath nor in local path."
exit 1
fi
if [ ! -w $(dirname $pidfile) ]; then
pidfile=./$prog
fi
start() {
if ! ls "$confdir/"*.conf > /dev/null 2>&1; then
echo "Cannot find any configuration files in $confdir."
exit 1
fi
errno=0
for cfgfile in "$confdir/"*.conf
do
if [ -f $progpath/$progexec ]; then
$progpath/$progexec $cfgfile --on-changes-target --errors-only > /dev/null 2>&1 &
else
echo "Cannot find $prog executable in $progpath"
exit 1
fi
pid=$!
retval=$?
if [ $retval -eq 0 ]; then
echo $pid > "$pidfile-$(basename $cfgfile)"
echo "$prog successfully started for configuration file $cfgfile"
else
echo "Cannot start $prog for configuration file $cfgfile"
errno=1
fi
done
exit $errno
}
stop() {
if ! ls "$pidfile-"* > /dev/null 2>&1; then
echo "No running $prog instances found."
exit 1
fi
for pfile in $pidfile-*
do
if ps -p$(cat $pfile) > /dev/null 2>&1
then
kill -TERM $(cat $pfile)
if [ $? == 0 ]; then
rm -f $pfile
echo "$prog instance $(basename $pfile) stopped."
else
echo "Cannot stop $prog instance $(basename $pfile)"
fi
else
rm -f $pfile
echo "$prog instance $pfile (pid $(cat $pfile)) is dead but pidfile exists."
fi
done
}
status() {
if ! ls "$pidfile-"* > /dev/null 2>&1; then
echo "Cannot find any running $prog instance."
exit 1
fi
errno=0
for pfile in $pidfile-*
do
if ps -p$(cat $pfile) > /dev/null 2>&1
then
echo "$prog instance $(basename $pfile) is running (pid $(cat $pfile))"
else
echo "$prog instance $pfile (pid $(cat $pfile)) is dead but pidfile exists."
errno=1
fi
done
exit $errno
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status
;;
condrestart|try-restart)
status || exit 0
stop
start
;;
*)
echo "Usage: $0 {start|stop|restart|status|condrestart|try-restart}"
;;
esac
exit 0
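The start() function above launches one background instance per configuration file and records its PID in a per-config pidfile named after that file. A minimal, self-contained sketch of that naming scheme, using sleep as a stand-in for the real osync.sh daemon and temporary directories instead of /etc/osync and /var/run:

```shell
# Sketch of start()'s per-config pidfile scheme; sleep stands in for osync.sh,
# and temporary directories replace /etc/osync and /var/run.
confdir=$(mktemp -d)
pidfile="$(mktemp -d)/osync-target-helper"
touch "$confdir/sync_test.conf"

for cfgfile in "$confdir/"*.conf
do
	sleep 60 &                                  # stand-in daemon
	pid=$!
	echo $pid > "$pidfile-$(basename "$cfgfile")"
done

cat "$pidfile-sync_test.conf"                       # prints the daemon's PID
kill "$(cat "$pidfile-sync_test.conf")"
```

This is why stop() and status() iterate over `$pidfile-*`: each configuration file gets its own tracked process.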


@ -1,13 +0,0 @@
[Unit]
Description=osync - Target helper service
After=time-sync.target local-fs.target network-online.target
Requires=time-sync.target local-fs.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/local/bin/osync.sh /etc/osync/%i --on-changes-target --errors-only
SuccessExitStatus=0 2
[Install]
WantedBy=multi-user.target
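This is a systemd template unit: assuming it is installed under a name such as osync-target-helper@.service (the exact file name is not shown in this diff), the %i specifier in ExecStart expands to the instance name given after the @, so each config file gets its own service instance (e.g. `systemctl enable osync-target-helper@sync_test.conf`). A quick sketch of that expansion:

```shell
# %i expands to the text between '@' and '.service' in the instance name.
unit="osync-target-helper@sync_test.conf.service"   # assumed template unit name
instance="${unit#*@}"
instance="${instance%.service}"
execstart="/usr/local/bin/osync.sh /etc/osync/${instance} --on-changes-target --errors-only"
echo "$execstart"
# prints: /usr/local/bin/osync.sh /etc/osync/sync_test.conf --on-changes-target --errors-only
```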


@ -1,11 +0,0 @@
[Unit]
Description=A robust two way (bidirectional) file sync script based on rsync with fault tolerance
After=time-sync.target local-fs.target network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/usr/local/bin/osync.sh /etc/osync/%i --on-changes-target --silent
SuccessExitStatus=0 2
[Install]
WantedBy=multi-user.target

osync.sh: 2858 changed lines (file diff suppressed because it is too large)

osync_target_helper.sh (executable file): 2722 changed lines (file diff suppressed because it is too large)


@ -9,13 +9,13 @@
##### Any other command will return a "syntax error" ##### Any other command will return a "syntax error"
##### For details, see ssh_filter.log ##### For details, see ssh_filter.log
# BUILD=2017020802 SCRIPT_BUILD=2017020802
## Allow sudo ## Allow sudo
SUDO_EXEC=true SUDO_EXEC=yes
## Log all valid commands too ## Log all valid commands too
_DEBUG=false _DEBUG=no
## Set remote token in authorized_keys ## Set remote token in authorized_keys
if [ "$1" != "" ]; then if [ "$1" != "" ]; then
@ -25,12 +25,12 @@ fi
LOG_FILE="${HOME}/.ssh/ssh_filter.log" LOG_FILE="${HOME}/.ssh/ssh_filter.log"
function Log { function Log {
DATE="$(date)" DATE=$(date)
echo "$DATE - $1" >> "$LOG_FILE" echo "$DATE - $1" >> "$LOG_FILE"
} }
function Go { function Go {
if [ "$_DEBUG" == true ]; then if [ "$_DEBUG" == "yes" ]; then
Log "Executing [$SSH_ORIGINAL_COMMAND]." Log "Executing [$SSH_ORIGINAL_COMMAND]."
fi fi
eval "$SSH_ORIGINAL_COMMAND" eval "$SSH_ORIGINAL_COMMAND"
@ -38,7 +38,7 @@ function Go {
case "${SSH_ORIGINAL_COMMAND}" in case "${SSH_ORIGINAL_COMMAND}" in
*"env _REMOTE_TOKEN=$_REMOTE_TOKEN"*) *"env _REMOTE_TOKEN=$_REMOTE_TOKEN"*)
if [ "$SUDO_EXEC" != true ] && [[ $SSH_ORIGINAL_COMMAND == *"sudo "* ]]; then if [ "$SUDO_EXEC" != "yes" ] && [[ $SSH_ORIGINAL_COMMAND == *"sudo "* ]]; then
Log "Command [$SSH_ORIGINAL_COMMAND] contains sudo which is not allowed." Log "Command [$SSH_ORIGINAL_COMMAND] contains sudo which is not allowed."
echo "Syntax error unexpected end of file" echo "Syntax error unexpected end of file"
exit 1 exit 1
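The case statement shown in this diff only lets an SSH command through when it carries the expected _REMOTE_TOKEN. A standalone sketch of that pattern match (token and command values are illustrative):

```shell
# Sketch of ssh_filter.sh's token gate: only commands embedding the expected
# "env _REMOTE_TOKEN=..." fragment are accepted.
_REMOTE_TOKEN="SomeAlphaNumericToken9"

check_command() {
	case "${1}" in
		*"env _REMOTE_TOKEN=$_REMOTE_TOKEN"*)
			echo "allowed" ;;
		*)
			echo "denied" ;;
	esac
}

check_command "env _REMOTE_TOKEN=SomeAlphaNumericToken9 rsync --server ."   # prints allowed
check_command "env _REMOTE_TOKEN=WrongToken rsync --server ."               # prints denied
```

In the real filter the token is injected via a `command="ssh_filter.sh SomeAlphaNumericToken9"` prefix in authorized_keys, and rejected commands get the fake "Syntax error unexpected end of file" reply.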


@ -1,8 +1,10 @@
###### osync - Rsync based two way sync engine with fault tolerance #!/usr/bin/env bash
###### (C) 2013-2023 by Orsiris de Jong (www.netpower.fr)
[GENERAL] ###### osync - Rsync based two way sync engine with fault tolerance
CONFIG_FILE_REVISION=1.3.0 ###### (C) 2013-2017 by Orsiris de Jong (www.netpower.fr)
###### osync v1.1x / v1.2x config file rev 2017060501
## ---------- GENERAL OPTIONS
## Sync job identification ## Sync job identification
INSTANCE_ID="sync_test" INSTANCE_ID="sync_test"
@ -21,14 +23,11 @@ SSH_RSA_PRIVATE_KEY="/home/backupuser/.ssh/id_rsa"
## Alternatively, you may specify an SSH password file (less secure). Needs sshpass utility installed. ## Alternatively, you may specify an SSH password file (less secure). Needs sshpass utility installed.
SSH_PASSWORD_FILE="" SSH_PASSWORD_FILE=""
## use the KRB5 credential cache to access SSH or rsync
#KRB5=true
## When using ssh filter, you must specify a remote token matching the one setup in authorized_keys ## When using ssh filter, you must specify a remote token matching the one setup in authorized_keys
_REMOTE_TOKEN=SomeAlphaNumericToken9 _REMOTE_TOKEN=SomeAlphaNumericToken9
## Create sync directories if they do not exist (true/false) ## Create sync directories if they do not exist
CREATE_DIRS=true CREATE_DIRS=no
## Log file location. Leaving this empty will create a logfile at /var/log/osync_version_SYNC_ID.log (or current directory if /var/log doesn't exist) ## Log file location. Leaving this empty will create a logfile at /var/log/osync_version_SYNC_ID.log (or current directory if /var/log doesn't exist)
LOGFILE="" LOGFILE=""
@ -40,7 +39,7 @@ MINIMUM_SPACE=10240
BANDWIDTH=0 BANDWIDTH=0
## If enabled, synchronization on remote system will be processed as superuser. See documentation for /etc/sudoers file configuration. ## If enabled, synchronization on remote system will be processed as superuser. See documentation for /etc/sudoers file configuration.
SUDO_EXEC=false SUDO_EXEC=no
## Paranoia option. Don't change this unless you read the documentation. ## Paranoia option. Don't change this unless you read the documentation.
RSYNC_EXECUTABLE=rsync RSYNC_EXECUTABLE=rsync
## Remote rsync executable path. Leave this empty in most cases ## Remote rsync executable path. Leave this empty in most cases
@ -65,71 +64,52 @@ RSYNC_EXCLUDE_FROM=""
## List elements separator char. You may set an alternative separator char for your directories lists above. ## List elements separator char. You may set an alternative separator char for your directories lists above.
PATH_SEPARATOR_CHAR=";" PATH_SEPARATOR_CHAR=";"
## By default, osync stores its state into the replica_path/.osync_workdir/state ## ---------- REMOTE SYNC OPTIONS
## This behavior can be changed for initiator or slave by overriding the following with an absolute path to a statedir, ex /opt/osync_state/initiator
## If osync runs locally, initiator and target state dirs **must** be different
INITIATOR_CUSTOM_STATE_DIR=""
TARGET_CUSTOM_STATE_DIR=""
[REMOTE_OPTIONS] ## ssh compression should be used unless your remote connection is good enough (LAN)
SSH_COMPRESSION=yes
## ssh compression should be used on WAN links, unless your remote connection is good enough (LAN), in which case it would slow down things
SSH_COMPRESSION=false
## Optional ssh options. Example to lower CPU usage on ssh compression, one can specify '-T -c arcfour -o Compression=no -x'
## -T = turn off pseudo-tty, -c arcfour = weakest but fastest ssh encryption (destination must accept "Ciphers arcfour" in sshd_config), -x turns off X11 forwarding
## arcfour isn't accepted on most newer systems, you may then prefer any AES encryption if processor has aes-ni hardware acceleration
## If the system does not provide hardware assisted acceleration, chacha20-poly1305@openssh.com is a good cipher to select
## See: https://wiki.csnu.org/index.php/SSH_ciphers_speed_comparison
## -o Compression=no is already handled by SSH_COMPRESSION option
## Uncomment the following line to use those optimizations, on secured links only
#SSH_OPTIONAL_ARGS="-T -c aes128-ctr -x"
#SSH_OPTIONAL_ARGS="-T -c chacha20-poly1305@openssh.com -x"
## Ignore ssh known hosts. DANGER WILL ROBINSON DANGER ! This can lead to security issues. Only enable this if you know what you're doing. ## Ignore ssh known hosts. DANGER WILL ROBINSON DANGER ! This can lead to security issues. Only enable this if you know what you're doing.
SSH_IGNORE_KNOWN_HOSTS=false SSH_IGNORE_KNOWN_HOSTS=no
## Use a single TCP connection for all SSH calls. Will make remote sync faster, but may work less well on lossy links.
SSH_CONTROLMASTER=false
## Check for connectivity to remote host before launching remote sync task. Be sure the host responds to ping. Failing to ping will stop sync. ## Check for connectivity to remote host before launching remote sync task. Be sure the host responds to ping. Failing to ping will stop sync.
REMOTE_HOST_PING=false REMOTE_HOST_PING=no
## Check for internet access by pinging one or more 3rd party hosts before remote sync task. Leave empty if you don't want this check to be performed. Failing to ping will stop sync. ## Check for internet access by pinging one or more 3rd party hosts before remote sync task. Leave empty if you don't want this check to be performed. Failing to ping will stop sync.
## If you use this function, you should set more than one 3rd party host, and be sure you can ping them. ## If you use this function, you should set more than one 3rd party host, and be sure you can ping them.
## Be aware that some DNS services like OpenDNS redirect nonexistent hostnames. Also, this adds an extra execution time of a bit less than a minute. ## Be aware that some DNS services like OpenDNS redirect nonexistent hostnames. Also, this adds an extra execution time of a bit less than a minute.
REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com" REMOTE_3RD_PARTY_HOSTS="www.kernel.org www.google.com"
[MISC_OPTIONS] ## ---------- MISC OPTIONS
## Optional arguments passed to rsync executable. The following are already managed by the program and should never be passed here ## Optional arguments passed to rsync executable. The following are already managed by the program and should never be passed here
## -r -l -p -t -g -o -D -E -u -i -n --executability -A -X -L -K -H -8 --zz -skip-compress -checksum -bwlimit -partial -partial-dir -no-whole-file -whole-file -backup -backup-dir -suffix ## -r -l -p -t -g -o -D -E -u -i -n --executability -A -X -L -K -H -8 -zz skip-compress checksum bwlimit partial partial-dir no-whole-file whole-file backup backup-dir suffix
## --exclude --exclude-from --include --include-from --list-only --stats ## --exclude --exclude-from --include --include-from --list-only --stats
## When dealing with different filesystems for sync, or using SMB mountpoints, try adding --modify-window=2 --omit-dir-times as optional arguments. ## When dealing with different filesystems for sync, or using SMB mountpoints, try adding --modify-window=2 --omit-dir-times as optional arguments.
RSYNC_OPTIONAL_ARGS="" RSYNC_OPTIONAL_ARGS=""
## Preserve basic linux permissions ## Preserve basic linux permissions
PRESERVE_PERMISSIONS=true PRESERVE_PERMISSIONS=yes
PRESERVE_OWNER=true PRESERVE_OWNER=yes
PRESERVE_GROUP=true PRESERVE_GROUP=yes
## On MACOS X, does not work and will be ignored ## On MACOS X, does not work and will be ignored
PRESERVE_EXECUTABILITY=true PRESERVE_EXECUTABILITY=yes
## Preserve ACLS. Make sure source and target FS can handle ACL. Disabled on Mac OSX. ## Preserve ACLS. Make sure source and target FS can handle ACL. Disabled on Mac OSX.
PRESERVE_ACL=false PRESERVE_ACL=no
## Preserve Xattr. Make sure source and target FS can manage identical XATTRS. Disabled on Mac OSX. Apparently, prior to rsync v3.1.2 there are some performance caveats with transferring XATTRS. ## Preserve Xattr. Make sure source and target FS can manage identical XATTRS. Disabled on Mac OSX. Apparently, prior to rsync v3.1.2 there are some performance caveats with transferring XATTRS.
PRESERVE_XATTR=false PRESERVE_XATTR=no
## Transforms symlinks into referent files/dirs. Be careful as symlinks without referent will break sync as if standard files could not be copied. ## Transforms symlinks into referent files/dirs. Be careful as symlinks without referent will break sync as if standard files could not be copied.
COPY_SYMLINKS=false COPY_SYMLINKS=no
## Treat symlinked dirs as dirs. CAUTION: This also follows symlinks outside of the replica root. ## Treat symlinked dirs as dirs. CAUTION: This also follows symlinks outside of the replica root.
KEEP_DIRLINKS=false KEEP_DIRLINKS=no
## Preserve hard links. Make sure source and target FS can manage hard links or you will lose them. ## Preserve hard links. Make sure source and target FS can manage hard links or you will lose them.
PRESERVE_HARDLINKS=false PRESERVE_HARDLINKS=no
## Do a full checksum on all files that have identical sizes, they are checksummed to see if they actually are identical. This can take a long time. ## Do a full checksum on all files that have identical sizes, they are checksummed to see if they actually are identical. This can take a long time.
CHECKSUM=false CHECKSUM=no
## Let RSYNC compress file transfers. Do not use this if both initiator and target replicas are on local system. Also, do not use this if you already enabled SSH compression. ## Let RSYNC compress file transfers. Do not use this if both initiator and target replicas are on local system. Also, do not use this if you already enabled SSH compression.
RSYNC_COMPRESS=true RSYNC_COMPRESS=yes
## Maximum execution time (in seconds) for sync process. Setting these values to zero will disable max execution times. ## Maximum execution time (in seconds) for sync process. Setting these values to zero will disable max execution times.
## Soft exec time only generates a warning. Hard exec time will generate a warning and stop sync process. ## Soft exec time only generates a warning. Hard exec time will generate a warning and stop sync process.
@ -146,57 +126,52 @@ MIN_WAIT=60
## Use 0 to wait indefinitely. ## Use 0 to wait indefinitely.
MAX_WAIT=7200 MAX_WAIT=7200
[BACKUP_DELETE_OPTIONS] ## ---------- BACKUP AND DELETION OPTIONS
## Log a list of conflictual files (EXPERIMENTAL) ## Log a list of conflictual files
LOG_CONFLICTS=false LOG_CONFLICTS=yes
## Send an email when conflictual files are found (implies LOG_CONFLICTS) ## Send an email when conflictual files are found (implies LOG_CONFLICTS)
ALERT_CONFLICTS=false ALERT_CONFLICTS=no
## Enabling this option will keep a backup of a file on the target replica if it gets updated from the source replica. Backups will be made to .osync_workdir/backups ## Enabling this option will keep a backup of a file on the target replica if it gets updated from the source replica. Backups will be made to .osync_workdir/backups
CONFLICT_BACKUP=true CONFLICT_BACKUP=yes
## Keep multiple backup versions of the same file. Warning, This can be very space consuming. ## Keep multiple backup versions of the same file. Warning, This can be very space consuming.
CONFLICT_BACKUP_MULTIPLE=false CONFLICT_BACKUP_MULTIPLE=no
## Osync will clean backup files after a given number of days. Setting this to 0 will disable cleaning and keep backups forever. Warning: This can be very space consuming. ## Osync will clean backup files after a given number of days. Setting this to 0 will disable cleaning and keep backups forever. Warning: This can be very space consuming.
CONFLICT_BACKUP_DAYS=30 CONFLICT_BACKUP_DAYS=30
## If the same file exists on both replicas, newer version will be synced. However, if both files have the same timestamp but differ, CONFLICT_PREVALANCE sets winner replica. ## If the same file exists on both replicas, newer version will be synced. However, if both files have the same timestamp but differ, CONFLICT_PREVALANCE sets winner replica.
CONFLICT_PREVALANCE=initiator CONFLICT_PREVALANCE=initiator
## On deletion propagation to the target replica, a backup of the deleted files can be kept. Deletions will be kept in .osync_workdir/deleted ## On deletion propagation to the target replica, a backup of the deleted files can be kept. Deletions will be kept in .osync_workdir/deleted
SOFT_DELETE=true SOFT_DELETE=yes
## Osync will clean deleted files after a given number of days. Setting this to 0 will disable cleaning and keep deleted files forever. Warning: This can be very space consuming. ## Osync will clean deleted files after a given number of days. Setting this to 0 will disable cleaning and keep deleted files forever. Warning: This can be very space consuming.
SOFT_DELETE_DAYS=30 SOFT_DELETE_DAYS=30
## Optional deletion skip on replicas. Valid values are "initiator", "target", or "initiator,target" ## Optional deletion skip on replicas. Valid values are "initiator", "target", or "initiator,target"
SKIP_DELETION= SKIP_DELETION=
## Optional sync type. By default, osync is bidirectional. You may want to use osync as unidirectional sync in some circumstances. Valid values are "initiator2target" or "target2initiator" ## ---------- RESUME OPTIONS
SYNC_TYPE=
[RESUME_OPTIONS]
## Try to resume an aborted sync task ## Try to resume an aborted sync task
RESUME_SYNC=true RESUME_SYNC=yes
## Number maximum resume tries before initiating a fresh sync. ## Number maximum resume tries before initiating a fresh sync.
RESUME_TRY=2 RESUME_TRY=2
## When a pidlock exists on slave replica that does not correspond to the initiator's instance-id, force pidlock removal. Be careful with this option if you have multiple initiators. ## When a pidlock exists on slave replica that does not correspond to the initiator's instance-id, force pidlock removal. Be careful with this option if you have multiple initiators.
FORCE_STRANGER_LOCK_RESUME=false FORCE_STRANGER_LOCK_RESUME=no
## Keep partial uploads that can be resumed on next run, experimental feature ## Keep partial uploads that can be resumed on next run, experimental feature
PARTIAL=false PARTIAL=no
## Use delta copy algorithm (useful when local paths are network drives), defaults to true ## Use delta copy algorithm (useful when local paths are network drives), defaults to yes
DELTA_COPIES=true DELTA_COPIES=yes
## ---------- ALERT OPTIONS
[ALERT_OPTIONS]
## List of alert mails separated by spaces ## List of alert mails separated by spaces
## Most Unix systems (including Win10 bash) have mail support out of the box ## Most Unix systems (including Win10 bash) have mail support out of the box
## Just make sure that the current user has enough privileges to use mail / mutt / sendmail and that the mail system is configured to allow outgoing mails ## Just make sure that the current user has enough privileges to use mail / mutt / sendmail and that the mail system is configured to allow outgoing mails
## on pfSense platform, smtp support needs to be configured in System > Advanced > Notifications ## on pfSense platform, smtp support needs to be configured in System > Advanced > Notifications
DESTINATION_MAILS="your@alert.tld" DESTINATION_MAILS="your@alert.tld"
## By default, only sync warnings / errors are sent by mail. This default behavior can be overridden here
ALWAYS_SEND_MAILS=false
## Optional change of mail body encoding (using iconv) ## Optional change of mail body encoding (using iconv)
## By default, all mails are sent in UTF-8 format without header (because of maximum compatibility of all platforms) ## By default, all mails are sent in UTF-8 format without header (because of maximum compatibility of all platforms)
## You may specify an optional encoding here (like "ISO-8859-1" or whatever iconv can handle) ## You may specify an optional encoding here (like "ISO-8859-1" or whatever iconv can handle)
@ -213,9 +188,9 @@ SMTP_ENCRYPTION=none
SMTP_USER= SMTP_USER=
SMTP_PASSWORD= SMTP_PASSWORD=
[EXECUTION_HOOKS] ## ---------- EXECUTION HOOKS
## Commands will be run before and / or after sync process ## Commands will be run before and / or after sync process (remote execution will only happen if REMOTE_OPERATION is set).
LOCAL_RUN_BEFORE_CMD="" LOCAL_RUN_BEFORE_CMD=""
LOCAL_RUN_AFTER_CMD="" LOCAL_RUN_AFTER_CMD=""
@ -226,8 +201,8 @@ REMOTE_RUN_AFTER_CMD=""
MAX_EXEC_TIME_PER_CMD_BEFORE=0 MAX_EXEC_TIME_PER_CMD_BEFORE=0
MAX_EXEC_TIME_PER_CMD_AFTER=0 MAX_EXEC_TIME_PER_CMD_AFTER=0
## Stops osync execution if one of the above before commands fails ## Stops osync execution if one of the above commands fails
STOP_ON_CMD_ERROR=true STOP_ON_CMD_ERROR=yes
## Run local and remote after sync commands even on failure ## Run local and remote after sync commands even on failure
RUN_AFTER_CMD_ON_ERROR=false RUN_AFTER_CMD_ON_ERROR=no
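A recurring change throughout this config diff is the migration of boolean options from yes/no (v1.2x style, right column) to true/false (v1.3x style, left column). osync config files are plain shell variable assignments, so a loader can be sketched as follows (file contents and the test are illustrative, not osync's actual parsing code):

```shell
# osync config files are sourced as shell; the v1.3 style uses true/false booleans.
conf=$(mktemp)
cat > "$conf" <<'EOF'
CREATE_DIRS=true
SUDO_EXEC=false
SOFT_DELETE_DAYS=30
EOF

. "$conf"

if [ "$CREATE_DIRS" == true ]; then
	echo "will create missing sync directories"
fi
```

Because the file is sourced, an old yes/no config would simply set the variable to the literal string "yes", which is why the upgrade script below rewrites values rather than relying on runtime coercion.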


@ -2,12 +2,12 @@
PROGRAM="osync instance upgrade script" PROGRAM="osync instance upgrade script"
SUBPROGRAM="osync" SUBPROGRAM="osync"
AUTHOR="(C) 2016-2020 by Orsiris de Jong" AUTHOR="(C) 2016-2017 by Orsiris de Jong"
CONTACT="http://www.netpower.fr/osync - ozy@netpower.fr" CONTACT="http://www.netpower.fr/osync - ozy@netpower.fr"
OLD_PROGRAM_VERSION="v1.0x-v1.2x" OLD_PROGRAM_VERSION="v1.0x-v1.1x"
NEW_PROGRAM_VERSION="v1.3x" NEW_PROGRAM_VERSION="v1.2x"
CONFIG_FILE_REVISION=1.3.0 CONFIG_FILE_VERSION=2017060501
PROGRAM_BUILD=2020012201 PROGRAM_BUILD=2016121101
## type -p does not work on platforms other than linux (bash). If it does not work, always assume output is not a zero exitcode ## type -p does not work on platforms other than linux (bash). If it does not work, always assume output is not a zero exitcode
if ! type "$BASH" > /dev/null; then if ! type "$BASH" > /dev/null; then
@ -41,7 +41,6 @@ RSYNC_EXCLUDE_FROM
PATH_SEPARATOR_CHAR PATH_SEPARATOR_CHAR
SSH_COMPRESSION SSH_COMPRESSION
SSH_IGNORE_KNOWN_HOSTS SSH_IGNORE_KNOWN_HOSTS
SSH_CONTROLMASTER
REMOTE_HOST_PING REMOTE_HOST_PING
REMOTE_3RD_PARTY_HOSTS REMOTE_3RD_PARTY_HOSTS
RSYNC_OPTIONAL_ARGS RSYNC_OPTIONAL_ARGS
@ -70,7 +69,6 @@ CONFLICT_PREVALANCE
SOFT_DELETE SOFT_DELETE
SOFT_DELETE_DAYS SOFT_DELETE_DAYS
SKIP_DELETION SKIP_DELETION
SYNC_TYPE
RESUME_SYNC RESUME_SYNC
RESUME_TRY RESUME_TRY
FORCE_STRANGER_LOCK_RESUME FORCE_STRANGER_LOCK_RESUME
@ -101,11 +99,11 @@ sync-test
${HOME}/backupuser/.ssh/id_rsa ${HOME}/backupuser/.ssh/id_rsa
'' ''
SomeAlphaNumericToken9 SomeAlphaNumericToken9
false no
'' ''
10240 10240
0 0
false no
rsync rsync
'' ''
include include
@ -114,43 +112,41 @@ include
'' ''
'' ''
\; \;
true yes
false no
false no
false
'www.kernel.org www.google.com' 'www.kernel.org www.google.com'
'' ''
true yes
true yes
true yes
true yes
false no
false no
false no
false no
false no
false no
true yes
7200 7200
10600 10600
1801 1801
60 60
7200 7200
false yes
false no
true yes
false no
30 30
initiator initiator
true yes
30 30
'' ''
'' yes
true
2 2
false no
false no
true yes
'' ''
'' ''
alert@your.system.tld alert@your.system.tld
@ -165,8 +161,8 @@ none
'' ''
0 0
0 0
true yes
false no
) )
function Init { function Init {
@ -179,8 +175,7 @@ function Init {
FAILED_DELETE_LIST_FILENAME="-failed-delete-$SYNC_ID" FAILED_DELETE_LIST_FILENAME="-failed-delete-$SYNC_ID"
if [ "${SLAVE_SYNC_DIR:0:6}" == "ssh://" ]; then if [ "${SLAVE_SYNC_DIR:0:6}" == "ssh://" ]; then
# Might also exist from old config file as REMOTE_OPERATION=yes REMOTE_OPERATION="yes"
REMOTE_OPERATION=true
# remove leading 'ssh://' # remove leading 'ssh://'
uri=${SLAVE_SYNC_DIR#ssh://*} uri=${SLAVE_SYNC_DIR#ssh://*}
@ -230,6 +225,22 @@ function Usage {
exit 128 exit 128
} }
function CheckEnvironment {
if [ "$REMOTE_OPERATION" == "yes" ]; then
if ! type -p ssh > /dev/null 2>&1
then
Logger "ssh not present. Cannot start sync." "CRITICAL"
return 1
fi
fi
if ! type -p rsync > /dev/null 2>&1
then
Logger "rsync not present. Sync cannot start." "CRITICAL"
return 1
fi
}
function LoadConfigFile { function LoadConfigFile {
local config_file="${1}" local config_file="${1}"
@ -256,134 +267,134 @@ function _RenameStateFilesLocal {
# Make sure there is no ending slash # Make sure there is no ending slash
state_dir="${state_dir%/}/" state_dir="${state_dir%/}/"
if [ -f "${state_dir}master${TREE_CURRENT_FILENAME}" ]; then if [ -f "$state_dir""master"$TREE_CURRENT_FILENAME ]; then
mv -f "${state_dir}master${TREE_CURRENT_FILENAME}" "${state_dir}initiator${TREE_CURRENT_FILENAME}" mv -f "$state_dir""master"$TREE_CURRENT_FILENAME "$state_dir""initiator"$TREE_CURRENT_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${TREE_CURRENT_FILENAME}" echo "Error while rewriting "$state_dir"master"$TREE_CURRENT_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}master${TREE_AFTER_FILENAME}" ]; then if [ -f "$state_dir""master"$TREE_AFTER_FILENAME ]; then
mv -f "${state_dir}master${TREE_AFTER_FILENAME}" "${state_dir}initiator${TREE_AFTER_FILENAME}" mv -f "$state_dir""master"$TREE_AFTER_FILENAME "$state_dir""initiator"$TREE_AFTER_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${TREE_AFTER_FILENAME}" echo "Error while rewriting "$state_dir"master"$TREE_AFTER_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}master${DELETED_LIST_FILENAME}" ]; then if [ -f "$state_dir""master"$DELETED_LIST_FILENAME ]; then
mv -f "${state_dir}master${DELETED_LIST_FILENAME}" "${state_dir}initiator${DELETED_LIST_FILENAME}" mv -f "$state_dir""master"$DELETED_LIST_FILENAME "$state_dir""initiator"$DELETED_LIST_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${DELETED_LIST_FILENAME}" echo "Error while rewriting "$state_dir"master"$DELETED_LIST_FILENAME
else else
rewrite=true rewrite=true
fi fi
rewrite=true rewrite=true
fi fi
if [ -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}" ]; then if [ -f "$state_dir""master"$FAILED_DELETE_LIST_FILENAME ]; then
mv -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}" "${state_dir}initiator${FAILED_DELETE_LIST_FILENAME}" mv -f "$state_dir""master"$FAILED_DELETE_LIST_FILENAME "$state_dir""initiator"$FAILED_DELETE_LIST_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${FAILED_DELETE_LIST_FILENAME}" echo "Error while rewriting "$state_dir"master"$FAILED_DELETE_LIST_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}master${TREE_CURRENT_FILENAME}-dry" ]; then if [ -f "$state_dir""master"$TREE_CURRENT_FILENAME"-dry" ]; then
mv -f "${state_dir}master${TREE_CURRENT_FILENAME}-dry" "${state_dir}initiator${TREE_CURRENT_FILENAME}-dry" mv -f "$state_dir""master"$TREE_CURRENT_FILENAME"-dry" "$state_dir""initiator"$TREE_CURRENT_FILENAME"-dry"
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${TREE_CURRENT_FILENAME}-dry" echo "Error while rewriting "$state_dir"master"$TREE_CURRENT_FILENAME"-dry"
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}master${TREE_AFTER_FILENAME}-dry" ]; then if [ -f "$state_dir""master"$TREE_AFTER_FILENAME"-dry" ]; then
mv -f "${state_dir}master${TREE_AFTER_FILENAME}-dry" "${state_dir}initiator${TREE_AFTER_FILENAME}-dry" mv -f "$state_dir""master"$TREE_AFTER_FILENAME"-dry" "$state_dir""initiator"$TREE_AFTER_FILENAME"-dry"
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${TREE_AFTER_FILENAME}-dry" echo "Error while rewriting "$state_dir""master"$TREE_AFTER_FILENAME"
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}master${DELETED_LIST_FILENAME}-dry" ]; then if [ -f "$state_dir""master"$DELETED_LIST_FILENAME"-dry" ]; then
mv -f "${state_dir}master${DELETED_LIST_FILENAME}-dry" "${state_dir}initiator${DELETED_LIST_FILENAME}-dry" mv -f "$state_dir""master"$DELETED_LIST_FILENAME"-dry" "$state_dir""initiator"$DELETED_LIST_FILENAME"-dry"
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${DELETED_LIST_FILENAME}-dry" echo "Error while rewriting "$state_dir"master"$DELETED_LIST_FILENAME"-dry"
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}-dry" ]; then if [ -f "$state_dir""master"$FAILED_DELETE_LIST_FILENAME"-dry" ]; then
mv -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}-dry" "${state_dir}initiator${FAILED_DELETE_LIST_FILENAME}-dry" mv -f "$state_dir""master"$FAILED_DELETE_LIST_FILENAME"-dry" "$state_dir""initiator"$FAILED_DELETE_LIST_FILENAME"-dry"
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}master${FAILED_DELETE_LIST_FILENAME}-dry" echo "Error while rewriting "$state_dir"master"$FAILED_DELETE_LIST_FILENAME"-dry"
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}slave${TREE_CURRENT_FILENAME}" ]; then if [ -f "$state_dir""slave"$TREE_CURRENT_FILENAME ]; then
mv -f "${state_dir}slave${TREE_CURRENT_FILENAME}" "${state_dir}target${TREE_CURRENT_FILENAME}" mv -f "$state_dir""slave"$TREE_CURRENT_FILENAME "$state_dir""target"$TREE_CURRENT_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}slave${TREE_CURRENT_FILENAME}" echo "Error while rewriting "$state_dir"slave"$TREE_CURRENT_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}slave${TREE_AFTER_FILENAME}" ]; then if [ -f "$state_dir""slave"$TREE_AFTER_FILENAME ]; then
mv -f "${state_dir}slave${TREE_AFTER_FILENAME}" "${state_dir}target${TREE_AFTER_FILENAME}" mv -f "$state_dir""slave"$TREE_AFTER_FILENAME "$state_dir""target"$TREE_AFTER_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}slave${TREE_AFTER_FILENAME}" echo "Error while rewriting "$state_dir"slave"$TREE_AFTER_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}slave${DELETED_LIST_FILENAME}" ]; then if [ -f "$state_dir""slave"$DELETED_LIST_FILENAME ]; then
mv -f "${state_dir}slave${DELETED_LIST_FILENAME}" "${state_dir}target${DELETED_LIST_FILENAME}" mv -f "$state_dir""slave"$DELETED_LIST_FILENAME "$state_dir""target"$DELETED_LIST_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}slave${DELETED_LIST_FILENAME}" echo "Error while rewriting "$state_dir"slave"$DELETED_LIST_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}slave${FAILED_DELETE_LIST_FILENAME}" ]; then if [ -f "$state_dir""slave"$FAILED_DELETE_LIST_FILENAME ]; then
mv -f "${state_dir}slave${FAILED_DELETE_LIST_FILENAME}" "${state_dir}target${FAILED_DELETE_LIST_FILENAME}" mv -f "$state_dir""slave"$FAILED_DELETE_LIST_FILENAME "$state_dir""target"$FAILED_DELETE_LIST_FILENAME
if [ $? != 0 ]; then if [ $? != 0 ]; then
echo "Error while rewriting ${state_dir}slave${FAILED_DELETE_LIST_FILENAME}" echo "Error while rewriting "$state_dir"slave"$FAILED_DELETE_LIST_FILENAME
else else
rewrite=true rewrite=true
fi fi
fi fi
if [ -f "${state_dir}slave${TREE_CURRENT_FILENAME}-dry" ]; then if [ -f "$state_dir""slave"$TREE_CURRENT_FILENAME"-dry" ]; then
		mv -f "${state_dir}slave${TREE_CURRENT_FILENAME}-dry" "${state_dir}target${TREE_CURRENT_FILENAME}-dry"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}slave${TREE_CURRENT_FILENAME}-dry"
		else
			rewrite=true
		fi
	fi
	if [ -f "${state_dir}slave${TREE_AFTER_FILENAME}-dry" ]; then
		mv -f "${state_dir}slave${TREE_AFTER_FILENAME}-dry" "${state_dir}target${TREE_AFTER_FILENAME}-dry"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}slave${TREE_AFTER_FILENAME}-dry"
		else
			rewrite=true
		fi
	fi
	if [ -f "${state_dir}slave${DELETED_LIST_FILENAME}-dry" ]; then
		mv -f "${state_dir}slave${DELETED_LIST_FILENAME}-dry" "${state_dir}target${DELETED_LIST_FILENAME}-dry"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}slave${DELETED_LIST_FILENAME}-dry"
		else
			rewrite=true
		fi
	fi
	if [ -f "${state_dir}slave${FAILED_DELETE_LIST_FILENAME}-dry" ]; then
		mv -f "${state_dir}slave${FAILED_DELETE_LIST_FILENAME}-dry" "${state_dir}target${FAILED_DELETE_LIST_FILENAME}-dry"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}slave${FAILED_DELETE_LIST_FILENAME}-dry"
		else
			rewrite=true
		fi
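The slave-to-target rename pattern above can be exercised standalone. This is a hedged sketch on a throwaway state directory, with a hypothetical list-file name standing in for osync's real `$DELETED_LIST_FILENAME` value:

```shell
# Throwaway state directory; trailing slash mirrors the script's expectation
state_dir="$(mktemp -d)/"
DELETED_LIST_FILENAME="-deleted-list"   # hypothetical value, for illustration only
touch "${state_dir}slave${DELETED_LIST_FILENAME}-dry"

rewrite=false
if [ -f "${state_dir}slave${DELETED_LIST_FILENAME}-dry" ]; then
	mv -f "${state_dir}slave${DELETED_LIST_FILENAME}-dry" "${state_dir}target${DELETED_LIST_FILENAME}-dry"
	if [ $? != 0 ]; then
		echo "Error while rewriting ${state_dir}slave${DELETED_LIST_FILENAME}-dry"
	else
		rewrite=true
	fi
fi
echo "rewrite=$rewrite"
```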
@@ -406,24 +417,24 @@ $SSH_CMD state_dir="${1}" DELETED_LIST_FILENAME="$DELETED_LIST_FILENAME" FAILED_
	state_dir="${state_dir%/}/"
	rewrite=false
	if [ -f "${state_dir}master${DELETED_LIST_FILENAME}" ]; then
		mv -f "${state_dir}master${DELETED_LIST_FILENAME}" "${state_dir}initiator${DELETED_LIST_FILENAME}"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}master${DELETED_LIST_FILENAME}"
		else
			rewrite=true
		fi
	fi
	if [ -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}" ]; then
		mv -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}" "${state_dir}initiator${FAILED_DELETE_LIST_FILENAME}"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}master${FAILED_DELETE_LIST_FILENAME}"
		else
			rewrite=true
		fi
	fi
	if [ -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}-dry" ]; then
		mv -f "${state_dir}master${FAILED_DELETE_LIST_FILENAME}-dry" "${state_dir}initiator${FAILED_DELETE_LIST_FILENAME}-dry"
		if [ $? != 0 ]; then
			echo "Error while rewriting ${state_dir}master${FAILED_DELETE_LIST_FILENAME}-dry"
		else
@@ -441,14 +452,14 @@ ENDSSH
function RenameStateFiles {
	_RenameStateFilesLocal "$MASTER_SYNC_DIR/$OSYNC_DIR/$STATE_DIR"
	if [ "$REMOTE_OPERATION" != "yes" ] && [ "$REMOTE_OPERATION" != true ]; then
		_RenameStateFilesLocal "$SLAVE_SYNC_DIR/$OSYNC_DIR/$STATE_DIR"
	else
		_RenameStateFilesRemote "$SLAVE_SYNC_DIR/$OSYNC_DIR/$STATE_DIR"
	fi
}
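Since config booleans may appear either as legacy `yes`/`no` strings or as newer `true`/`false` values, the local branch of this dispatch has to exclude both remote spellings. A self-contained sketch with stub functions (names taken from the script, bodies hypothetical):

```shell
# Stub implementations so the dispatch can run standalone; the real
# functions rename state files locally or over ssh.
_RenameStateFilesLocal()  { echo "local: $1"; }
_RenameStateFilesRemote() { echo "remote: $1"; }

REMOTE_OPERATION=true    # may also be "yes", "no" or false
if [ "$REMOTE_OPERATION" != "yes" ] && [ "$REMOTE_OPERATION" != true ]; then
	result=$(_RenameStateFilesLocal "/tmp/sync/.osync_workdir/state")
else
	result=$(_RenameStateFilesRemote "/tmp/sync/.osync_workdir/state")
fi
echo "$result"
```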
function CheckAndBackup {
	local config_file="${1}"

	if ! grep "MASTER_SYNC_DIR=" "$config_file" > /dev/null && ! grep "INITIATOR_SYNC_DIR=" "$config_file" > /dev/null; then
@@ -462,10 +473,6 @@ function CheckAndBackup {
		echo "Cannot backup config file."
		exit 1
	fi
}

function RewriteOldConfigFiles {
	local config_file="${1}"

	echo "Rewriting config file $config_file"
@@ -481,7 +488,7 @@ function RewriteOldConfigFiles {
	rm -f "$config_file.tmp"
}
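`RewriteOldConfigFiles` (body elided above) maps pre-v1.1 `MASTER_*`/`SLAVE_*` keywords to their `INITIATOR_*`/`TARGET_*` successors. A minimal sketch of that sed rewrite on a throwaway file, with the keyword list abbreviated to two entries rather than the script's full set:

```shell
conf="$(mktemp)"
printf 'MASTER_SYNC_DIR="/opt/data"\nSLAVE_SYNC_DIR="/backup/data"\n' > "$conf"

# Rename legacy keywords in place; the .tmp suffix mirrors the script's sed -i'.tmp' usage
sed -i'.tmp' 's/^MASTER_SYNC_DIR=/INITIATOR_SYNC_DIR=/;s/^SLAVE_SYNC_DIR=/TARGET_SYNC_DIR=/' "$conf"
rm -f "$conf.tmp"
cat "$conf"
```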
function AddMissingConfigOptionsAndFixBooleans {
	local config_file="${1}"
	local counter=0
@@ -489,69 +496,27 @@ function AddMissingConfigOptionsAndFixBooleans {
		if ! grep "^${KEYWORDS[$counter]}=" > /dev/null "$config_file"; then
			echo "${KEYWORDS[$counter]} not found"
			if [ $counter -gt 0 ]; then
				if [ "${VALUES[$counter]}" == true ] || [ "${VALUES[$counter]}" == false ]; then
					sed -i'.tmp' '/^'${KEYWORDS[$((counter-1))]}'=*/a\'$'\n'${KEYWORDS[$counter]}'='"${VALUES[$counter]}"'\'$'\n''' "$config_file"
				else
					sed -i'.tmp' '/^'${KEYWORDS[$((counter-1))]}'=*/a\'$'\n'${KEYWORDS[$counter]}'="'"${VALUES[$counter]}"'"\'$'\n''' "$config_file"
				fi
				if [ $? -ne 0 ]; then
					echo "Cannot add missing ${KEYWORDS[$counter]}."
					exit 1
				fi
			else
				if [ "${VALUES[$counter]}" == true ] || [ "${VALUES[$counter]}" == false ]; then
					sed -i'.tmp' '/^\[GENERAL\]$/a\'$'\n'${KEYWORDS[$counter]}'='"${VALUES[$counter]}"'\'$'\n''' "$config_file"
				else
					sed -i'.tmp' '/^\[GENERAL\]$/a\'$'\n'${KEYWORDS[$counter]}'="'"${VALUES[$counter]}"'"\'$'\n''' "$config_file"
				fi
			fi
			echo "Added missing ${KEYWORDS[$counter]} config option with default option [${VALUES[$counter]}]"
		else
			# Not the most elegant but the quickest way :)
			if grep "^${KEYWORDS[$counter]}=yes$" > /dev/null "$config_file"; then
				sed -i'.tmp' 's/^'${KEYWORDS[$counter]}'=.*/'${KEYWORDS[$counter]}'=true/g' "$config_file"
				if [ $? -ne 0 ]; then
					echo "Cannot rewrite ${KEYWORDS[$counter]} boolean to true."
					exit 1
				fi
			elif grep "^${KEYWORDS[$counter]}=no$" > /dev/null "$config_file"; then
				sed -i'.tmp' 's/^'${KEYWORDS[$counter]}'=.*/'${KEYWORDS[$counter]}'=false/g' "$config_file"
				if [ $? -ne 0 ]; then
					echo "Cannot rewrite ${KEYWORDS[$counter]} boolean to false."
					exit 1
				fi
			fi
		fi
		counter=$((counter+1))
	done
}
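The yes/no to true/false pass can be tried in isolation. Here `KEYWORD` iterates over a two-entry stand-in for the script's full `KEYWORDS` array, on a throwaway config file:

```shell
conf="$(mktemp)"
printf 'CREATE_DIRS=yes\nSUDO_EXEC=no\n' > "$conf"

# KEYWORD stands in for one entry of the script's KEYWORDS array
for KEYWORD in CREATE_DIRS SUDO_EXEC; do
	if grep "^${KEYWORD}=yes$" > /dev/null "$conf"; then
		sed -i'.tmp' "s/^${KEYWORD}=.*/${KEYWORD}=true/g" "$conf"
	elif grep "^${KEYWORD}=no$" > /dev/null "$conf"; then
		sed -i'.tmp' "s/^${KEYWORD}=.*/${KEYWORD}=false/g" "$conf"
	fi
done
cat "$conf"
```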
function RewriteSections {
	local config_file="${1}"

	# Earlier config files delimited sections with comment banners; replace them with bracketed section names
	sed -i'.tmp' 's/## ---------- GENERAL OPTIONS/[GENERAL]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- REMOTE OPTIONS/[REMOTE_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- REMOTE SYNC OPTIONS/[REMOTE_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- MISC OPTIONS/[MISC_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- BACKUP AND DELETION OPTIONS/[BACKUP_DELETE_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- BACKUP AND TRASH OPTIONS/[BACKUP_DELETE_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- RESUME OPTIONS/[RESUME_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- ALERT OPTIONS/[ALERT_OPTIONS]/g' "$config_file"
	sed -i'.tmp' 's/## ---------- EXECUTION HOOKS/[EXECUTION_HOOKS]/g' "$config_file"
}
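A quick check of the banner-to-section rewrite on a two-line throwaway config (the `INSTANCE_ID` value is arbitrary):

```shell
conf="$(mktemp)"
printf '## ---------- GENERAL OPTIONS\nINSTANCE_ID="sync_test"\n' > "$conf"

# Same substitution the script applies, on the GENERAL banner only
sed -i'.tmp' 's/## ---------- GENERAL OPTIONS/[GENERAL]/g' "$conf"
head -n 1 "$conf"
```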
function UpdateConfigHeader {
	local config_file="${1}"

	if ! grep "^CONFIG_FILE_REVISION=" > /dev/null "$config_file"; then
		if grep "\[GENERAL\]" > /dev/null "$config_file"; then
			sed -i'.tmp' '/^\[GENERAL\]$/a\'$'\n'CONFIG_FILE_REVISION=$CONFIG_FILE_REVISION$'\n''' "$config_file"
		else
			sed -i'.tmp' '/.*onfig file rev.*/a\'$'\n'CONFIG_FILE_REVISION=$CONFIG_FILE_REVISION$'\n''' "$config_file"
		fi
		# "onfig file rev" to deal with earlier variants of the file where c was lower or uppercase
		sed -i'.tmp' 's/.*onfig file rev.*//' "$config_file"
	fi
	rm -f "$config_file.tmp"
}
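The revision-marker insertion can be sketched on a minimal config. The revision number below is hypothetical, and this uses GNU sed's one-line `a text` append form; the script itself uses the `$'\n'` continuation form, which also works with BSD sed:

```shell
conf="$(mktemp)"
printf '[GENERAL]\nINSTANCE_ID="sync_test"\n' > "$conf"
CONFIG_FILE_REVISION=1.3.0   # hypothetical revision number

# Only add the marker when it is not already present
if ! grep "^CONFIG_FILE_REVISION=" > /dev/null "$conf"; then
	sed -i'.tmp' "/^\[GENERAL\]$/a CONFIG_FILE_REVISION=${CONFIG_FILE_REVISION}" "$conf"
fi
rm -f "$conf.tmp"
sed -n 2p "$conf"
```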
_QUICK_SYNC=0
@@ -561,11 +526,11 @@ do
	case $i in
		--master=*)
		MASTER_SYNC_DIR=${i##*=}
		_QUICK_SYNC=$((_QUICK_SYNC + 1))
		;;
		--slave=*)
		SLAVE_SYNC_DIR=${i##*=}
		_QUICK_SYNC=$((_QUICK_SYNC + 1))
		;;
		--rsakey=*)
		SSH_RSA_PRIVATE_KEY=${i##*=}
@@ -587,19 +552,11 @@ elif [ "$1" != "" ] && [ -f "$1" ] && [ -w "$1" ]; then
	CONF_FILE="${CONF_FILE%/}"
	LoadConfigFile "$CONF_FILE"
	Init
	CheckAndBackup "$CONF_FILE"
	RewriteSections "$CONF_FILE"
	RewriteOldConfigFiles "$CONF_FILE"
	AddMissingConfigOptionsAndFixBooleans "$CONF_FILE"
	UpdateConfigHeader "$CONF_FILE"
	if [ -d "$MASTER_SYNC_DIR" ]; then
		RenameStateFiles "$MASTER_SYNC_DIR"
	fi
	if [ -d "$SLAVE_SYNC_DIR" ]; then
		RenameStateFiles "$SLAVE_SYNC_DIR"
	fi
	rm -f "$CONF_FILE.tmp"
	echo "Configuration file upgrade finished."
else
	Usage
fi