Motion - Image Culling And Watchdog Scripts

Motion Script Collection - Image Culling and Watchdog

Introduction

This is a collection of scripts that I've thrown together to support my full-time security monitoring system.

Some of the scripts are in Bash, and some are in Perl.

It includes some web wrappers which I use to view captured images and system status via a web browser (served by Apache on the same machine), watchdog scripts to make sure the Motion process is alive and well, and a culler script which deletes old capture data in order to keep disk usage within a defined utilization range.

The culler script differs from a simple Unix "find" command that removes files older than a certain date. Instead, the culler works from a threshold of disk utilization, and deletes only as many of the oldest files as are needed to bring disk usage back within specification.
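To illustrate the difference, here is a rough shell sketch of the two approaches. This is only the idea, not the actual culler.pl code; the paths and the 80% figure are examples.

# Age-based cleanup: deletes everything older than 7 days, whether or not
# the disk is anywhere near full.
find /home/motion/capture_root/capture -type f -mtime +7 -delete

# Utilization-based culling (the culler's idea): delete the oldest files,
# one at a time, only while the partition is above the chosen threshold.
while [ "$(df -P /home/motion/capture_root | awk 'NR==2 {sub(/%/,"",$5); print $5}')" -gt 80 ]
do
    oldest=$(find /home/motion/capture_root/capture -type f -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
    [ -n "$oldest" ] || break
    rm -f -- "$oldest"
done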

Detailed Description

These files are my "custom" layout options for Motion.

I run a number of cameras, and want to be able to view the images, as well as system status over the web. Additionally, since I started with a system that was, well, unstable at best, I wanted it to be self-maintaining in the event of system crash or something. By now, the OS and motion are both quite stable, but these scripts still give me some peace of mind.

The culler program maintains a set amount of free disk space, so as not to allow the drive to fill up and/or the OS to crash. It uses a convoluted set of Unix "find" and sort commands. I run it from a crontab, and it figures out which files to delete based upon available disk space and the age of the files.

The culler program source has a lot of internal documentation on how to set it up.

Here's how I have my system organized:

/home/motion                  home dir for the motion user
/home/motion/motion           distribution dir for the motion source
/home/motion/logs             script log directory
/home/motion/capture_root     root directory for captured images and mpgs
/home/motion/support_scripts  directory for all my scripts
/home/motion/docs             where I leave documentation for myself... like this file

Files in this archive:
docs/README.txt                     this information, in plain text.
docs/sample_crontab                 a sample of the crontab I have for the motion user
support_scripts/culler.pl           the program that actually culls the older images
support_scripts/monitorMotion.bash  a script that confirms Motion is running, and starts it if necessary.
support_scripts/runCuller.bash      the script that gets called from my crontab to run the culler script, if it's not still running from before.
web                                 web-server related items
web/capture_root                    a few files, like a style sheet, a README, and a FOOTER, that are used by Apache to wrap the directory listings.
web/cgi-bin/stats.pl                a simple CGI that dumps a bunch of useful status information

Attached Files

You can grab the files at http://www.fogbound.net/motion_scripts.tar.gz

Installation

This package comprises several scripts. They're installable separately, or as a full set.

In any case, open up the archive using "tar":

tar xzvf motion_scripts.tar.gz

The Watchdog

This is a simple watchdog script that will restart Motion if it ever fails for some reason.

To install it, copy the script from support_scripts/monitorMotion.bash to wherever you want the script to live. I put mine in /home/motion/support_scripts.

Check where your Motion executable is. You can type "which motion" from the command line to get the path. Typically, it installs as /usr/local/bin/motion. If it's different on your system, edit monitorMotion.bash and change the motion path it uses to match the way your system is set up.
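For example, if "which motion" reports /opt/motion/bin/motion on your machine (a made-up location), the path in monitorMotion.bash needs to point there instead. The variable name below is only illustrative; check the script itself for the real line:

# hypothetical example only -- look at monitorMotion.bash for the actual line
MOTION_BIN=/usr/local/bin/motion    # the typical install location
MOTION_BIN=/opt/motion/bin/motion   # what you would use instead in this example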

Edit your crontab to run monitorMotion. You can do this by logging in as the Motion account (or whatever account you run Motion from), and typing "crontab -e". In the editor, place a line like:

0,15,30,45 * * * * /home/motion/support_scripts/monitorMotion.bash >> /home/motion/logs/monitorMotion.log 2>/dev/null

In this example, the monitor script will run every fifteen minutes, and will write a log in /home/motion/logs. You will have to change the paths to the relevant files if they are different on your system.

If you want to monitor the Motion process more or less frequently, you can change the schedule fields in your crontab. Entering "0-59/2" in the minutes field, for example, will have it check every two minutes.
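For example, to check every two minutes instead of every fifteen, the crontab line from above becomes:

0-59/2 * * * * /home/motion/support_scripts/monitorMotion.bash >> /home/motion/logs/monitorMotion.log 2>/dev/null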

Once you save your crontab, you're done. The script will run on schedule, and restart Motion if it's not running.

The Culler

To install the culler, place "culler.pl" and "runCuller.bash" in some handy directory. I put them in /home/motion/support_scripts on my machine.

First, edit runCuller.bash. Change the second line from:

cd /home/motion/support_scripts

to go to the directory where you have culler.pl installed.
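For example, if you put the scripts in /usr/local/motion/scripts (a made-up location), that line would become:

cd /usr/local/motion/scripts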

Next, edit culler.pl.

There are a lot of variables up near the top of the file. They should be well commented. Those comments are repeated here:

#Filesystems to monitor. If you'd like to monitor more than one,
# separate them with commas.
# @filesystems = ("/dev/hda0","/dev/hda1");

@filesystems = ("/dev/hda0");
# Directories to cull. If space gets tight, which directories
# should we cull old files from? Note that these are the tops of
# the hierarchies -- any files in directories below this point
# are eligible for deletion!
# You can list multiple directories, but if they're on the same
# partition, it's possible that only the first will get culled.
# @culldirs = ("/home/motion/capture/front_door","/home/motion/capture/back_yard");

@culldirs = ("/home/motion/capture_root/capture/");

# preservedirs are directories which will not be deleted, even if they
# are empty. This way, Motion won't lose a place to put the data.
@preservedirs = ("/home/motion/capture_root/capture/cam1","/home/motion/capture_root/capture/cam2",
"/home/motion/capture_root/capture/cam3","/home/motion/capture_root/capture/cam4",
"/home/motion/capture_root/capture/cam5","/home/motion/capture_root/capture/cam6");
# Trigger threshold. How full should a partition be allowed to get
# (interpreted as a percentage) before culling begins? Must be
# greater than 0!
# $threshold = 80;

$threshold = 80;

#
# Be noisy?
# Set this nonzero to output telemetry to STDOUT. If you run in test mode,
# this will be over-ruled, and set to true.
$verbose = 1;

Test the culler by running it in test mode. THIS IS IMPORTANT, since the culler DELETES FILES! Run the culler from the command line:

./culler.pl -t

For testing purposes, you may wish to lower $threshold temporarily (e.g., to something below the partition's current utilization) so that test mode shows what files it would have deleted if it were run normally. STUDY THE OUTPUT! If it contains files that are not Motion capture output, change your configuration! You don't want culler to delete important directories or files when space runs low!

When you are satisfied that culler is properly configured, you can add the runCuller script to your crontab. You can do this by logging in as the Motion account (or whatever account you run Motion from), and typing "crontab -e". In the editor, place a line like:

30 0,6,12,18 * * * /home/motion/support_scripts/runCuller.bash >> /home/motion/logs/culler.log 2>/dev/null

In this example, the culler will run at half-past midnight, 6:30AM, half-past noon, and 6:30PM, and will write a log in /home/motion/logs. You will have to change the paths to the relevant files if they are different on your system.

The culler can take a very long time to run on very large disks, so having it run more frequently may not actually help. The runCuller.bash script will prevent the culler from being run multiple times simultaneously (which would load up the system and not do anything very productive), so there's no risk involved in increasing the frequency, but if you're running into disk space issues, you're probably better off just reducing $threshold in the culler.
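The exact contents of runCuller.bash may differ from this, but as a rough sketch of the guard it provides, a wrapper only needs to skip the run when a culler process is already alive:

#!/bin/bash
# Sketch only -- see the distributed runCuller.bash for the real script.
cd /home/motion/support_scripts
if pgrep culler.pl > /dev/null
then
    echo "`date` culler still running from a previous invocation, skipping"
else
    ./culler.pl
fi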

The Web Wrappers

(documentation coming soon)

Users Guide

Once the scripts are set up, there's not really any further interaction required.

Comments and Bug Reports


Kenneth came by and changed the HTML to TWiki Shorthand Language and attached the tar.gz to the topic.

-- KennethLavrsen - 05 Oct 2004

When I tried to download this, the default filename ends with tar.tar instead of tar.gz. I manually changed the extension of my downloaded file and it subsequently gunzips and untars fine.

-Bruce.

-- BruceDurham - 13 Oct 2004

Bruce, maybe it's an issue with your program/browser; I downloaded it just now with Mozilla Firefox, and the filename is correct.

-- MarcoCarvalho - 13 Oct 2004

Answer: Yes it is a browser issue. Internet Explorer does this all the time when I download tar.gz files. Also from other sites. -- KennethLavrsen - 14 Oct 2004

When you call the bash script with a full path that contains the word motion, the command `ps aux | grep motion | wc -l` returns more than one occurrence even if Motion isn't started, because the ps output includes a line like ... /bin/bash /home/motion/support_scripts/monitorMotion.bash

So to avoid the bug, I changed the ps command to `ps aux | grep motion | grep -v /bin/bash | wc -l`. BTW, don't forget to put your own bash path after the grep -v.
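In other words, sketching the change rather than quoting monitorMotion.bash verbatim (the variable name here is made up), the counting line goes from the first form to the second:

# before: the "/bin/bash /home/motion/support_scripts/monitorMotion.bash" line is counted too
count=`ps aux | grep motion | wc -l`
# after: filter out the shell that is running the watchdog script itself
count=`ps aux | grep motion | grep -v /bin/bash | wc -l`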

-- PascalRheaume - 26 Aug 2008

I get the following:
rmdir: missing operand
Try `rmdir --help' for more information.
Issued: find /home/motion/cam1/ -type d -ctime +1 -not \( -path "/home/motion/capture_root/capture/cam1" -or -path "/home/motion/capture_root/capture/cam2" -or -path "/home/motion/capture_root/capture/cam3" -or -path "/home/motion/capture_root/capture/cam4" -or -path "/home/motion/capture_root/capture/cam5" -or -path "/home/motion/capture_root/capture/cam6" -or -false \) -print0 | sort -rz | xargs -0 rmdir

-- DavidPickard - 26 Jul 2010

I had problems with the ps-based check. The problem is that the ps | grep pipeline inconsistently matches itself. I used a pgrep-based script instead. This works for me.

#!/bin/bash
# Checks if a process is running. If not, it tries to restart it
# D Conway 2010  ver 1.0

# Variable Section
LogFileDir="/var/log/user.log"
ProcessToCheck="motion"
RestartCommand="/usr/bin/motion"

# Program  Section
if pgrep $ProcessToCheck &>/dev/null
then
        echo `date` Motion OK.
else
        echo `date` RESTARTING MOTION
        $RestartCommand
fi

-- DazzConway - 13 Nov 2010

The file culler script has some flaws that are not obvious.

If the USEARGS option is set to 1, then no files are deleted until the length of the list of files exceeds $MAXCOMMANDLINE.

The delete command is in a loop which keeps deleting a block of files until the length of the list is less than the $MAXCOMMANDLINE.

If you are trying to cull old files from a directory containing a small number of large files, the space used will exceed the threshold and may fill up the disk. In this case, make $MAXCOMMANDLINE a smaller number (e.g., 100). Note that the length of the file names also makes a difference.

I have got around this problem by setting $USEARGS = 0. The alternative logic deletes files individually, so $MAXCOMMANDLINE has no effect.

If you are trying to cull files from multiple directories on the same file system, this script will not work very well. It will cull files from the first directory until the threshold is reached, then leave the remaining directories untouched. You could end up with very new files being culled while another directory retains old files.

To fix this problem, the logic needs to change as follows (a shell sketch follows the list):

1. Make a list of all directories to be culled on the same file system.

2. Use the "find" command to make a list of all files in the cull directories.

3. Sort the file list by age.

4. Delete the oldest files until the threshold is reached.

This will delete the oldest files across multiple directories.
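A rough shell sketch of that combined-list approach (the directory names and the 80% threshold are placeholders only):

# build one list of files from all cull directories, oldest first, then delete
# from the front of the list until usage is back under the threshold
find /home/motion/capture_root/capture/cam1 /home/motion/capture_root/capture/cam2 \
     -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
while read -r file
do
    usage=$(df -P /home/motion/capture_root | awk 'NR==2 {sub(/%/,"",$5); print $5}')
    [ "$usage" -le 80 ] && break
    rm -f -- "$file"
done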

I have put all my images into one directory and used the file naming options to differentiate between them. This avoids the need to modify the cull script to achieve the desired result. My cull script only acts on one directory with no sub-directories.

-- DazzConway - 23 Dec 2010

Hi

I rewrote Culler because the original just didn't work for me. Here it is:

{code}
#!/usr/bin/perl
# ===========================================================
# Partition Space Monitor and Culler
# "The Culler of Authority"
# Version 1.0 SjG 28 Dec 2002
# samuelg@fogbound.net
# Version 1.2 Darren Conway 3 Dec 2010
# Does not remove directories
# Version 1.3 Darren Conway 22 Dec 2010
# remove USEARGS option and code
# Version 1.5 Darren Conway 24 Dec 2010
# complete rewrite of logic. Deletes files across multiple
# dirs on one named filesystem. Does not do multiple filesystems.
# does not delete sub directories.
# ===========================================================
# *WARNING* *WARNING* *WARNING* *WARNING*
#
# THIS PROGRAM DELETES FILES! THIS PROGRAM COULD DELETE IMPORTANT
# FILES IF NOT CONFIGURED CORRECTLY!
#
# *WARNING* *WARNING* *WARNING* *WARNING*
#
# This is built based upon the output of a few utility
# programs, which should be fairly standard, such as "df" and
# "find". It's all built based on the output from these programs
# as they are distributed as part of the Debian Woody release.
#
# While the format of the output from these programs should be
# relatively similar across distros (especially if you use the GNU
# versions), they may vary, so use this program with caution!
#
# IF THE FORMATS ARE DIFFERENT, THIS COULD CAUSE THIS
# PROGRAM TO DELETE IMPORTANT FILES!
#
# ALWAYS run with the "-t" flag the first time to make sure your
# configuration is correct, and to avoid deleting important data!
# ===========================================================
#
# ===========================================================
# Configuration:
#
# Filesystems to monitor. If you'd like to monitor more than one,
# separate them with commas.
# place filesystems into an array. The file system has to be
# the one with the directories to cull.

# The filesystem with the directories to be culled.

$cullfilesystem = "/dev/mapper/kartcam-root";

# Directories to cull. If space gets tight, which directories
# should we cull old files from? Note that these are the tops of
# the hierarchies -- any files in directories below this point
# are eligible for deletion!
# You can list multiple directories, but they must be on the same
# file system. Directories are separated by whitespace.

$culldir = "/home/darren/images";

# Trigger threshold. How full should a partition be allowed to get
# (interpreted as a percentage) before culling begins? Must be
# greater than 0!

$threshold = 50;

#
# Be noisy?
# Set this nonzero to output telemetry to STDOUT. If you run in test mode,
# this will be over-ruled, and set to true.
$verbose = 0;

# ---------------------------------------------------------------
# Nothing to see beyond this point, unless you need to change
# the code to work with other output formats of Unix commands.
#
# Oh -- and while you're thinking about it, put me in a crontab!
# ---------------------------------------------------------------

# What command should I use for "df" ?
$DFCOMMAND = "df -P -k";

# ---------------------------------------------------------------
# This is really the end of configurable stuff. Really. Well,
# unless you want to tweak the code, that is.
# ---------------------------------------------------------------

# snag any arguments
*theArgs = &grabArgs(@ARGV);

# test implies verbose
if ($theArgs{'t'})
{
    print "culler.pl VERBOSE TEST MODE \n";
    $verbose = 1;
} # $theArgs

# If cull needed, 1
# initiate to zero (not needed)
$CullNeeded = 0;

# test to see if it's time to cull
if ( $verbose )
{
    print "Test to see if cull needed. \n \n";
}

my @df = split(/\n/,`$DFCOMMAND`);   # Use the DFcommand to get the file details

foreach $thisLine (@df)
{
    if ($thisLine =~ /^Filesystem/i)
    {
        next;
    }
    @thisFilesystem = split(/\s+/,$thisLine);
    $thisFs = $thisFilesystem[0];

    # populate a bunch of stuff about this filesystem
    $blocks{$thisFs} = $thisFilesystem[1];
    $used{$thisFs} = $thisFilesystem[2];
    $availableCapacity{$thisFs} = $thisFilesystem[3];
    ($utilized{$thisFs} = $thisFilesystem[4]) =~ s/%//;
    $mountPoint{$thisFs} = $thisFilesystem[5];
    $device{$thisFilesystem[5]} = $thisFs;
    $deviceNo{stat($thisFilesystem[5])} = $thisFs;

    if ($utilized{$thisFs} > 0)   # if utilised > 0 then calculate percentage used
    {
        if ($verbose)   # Print details of file system and usage
        {
            print "\n This filesystem : $thisFilesystem[0] \n";
            print "Block capacity : $thisFilesystem[1] \n";
            print "Blocks used : $used{$thisFs} \n";
            print "Blocks available : $availableCapacity{$thisFs} \n";
            print "Blocks utilised : $utilized{$thisFs} % \n";
            print "Mount point : $mountPoint{$thisFs}\n";
        }

        # Calculate the blocks / percent of diskspace used
        $blocksPerPercent = $used{$thisFs} + $availableCapacity{$thisFs};
        # calculated as a decimal
        $utilized{$thisFs} = $used{$thisFs} / ( $used{$thisFs} + $availableCapacity{$thisFs} );

        if ( $cullfilesystem eq $thisFilesystem[0] )
        {
            if ($verbose)
            {
                print "Found filesystem match : $cullfilesystem\n";
            }
            if ($utilized{$thisFs} > ( $threshold / 100))
            {
                $toRemove{$thisFs} = ($utilized{$thisFs} - $threshold / 100) * $blocksPerPercent;
                if ($verbose)
                {
                    print "Threshold exceeded, Cull is needed\n";
                    print "There are $toRemove{$thisFs} blocks to be culled from $thisFs.\n";
                } # if ($verbose)

                @fileList = split(/\n/, `find $culldir -type f -printf '%T@ %k %p\n' | sort -n`);

                # remove files loop
                $rmKilobytes = 0;   # initiate variables
                $index = 0;
                $removeList = "";
                while ($rmKilobytes < $toRemove{$thisFs} && $index < $#fileList)
                {
                    ($timestamp, $size, $name) = split(/\s/,$fileList[$index]);
                    $rmKilobytes += $size;   # add size of file to tagged list
                    if ($verbose)
                    {
                        print "removing $name\n";
                    }

                    if ($theArgs{'t'})
                    {
                        print "(Testmode. No command issued.)\n";
                    }
                    else
                    {
                        unlink $name or warn "Could not unlink: $name \n";   # issue DELETE files command
                    } # if ($theArgs{'t'})

                    $index++;   # inc loop counter
                } # while
            } # if ($utilized{$thisFs} > ( $threshold / 100))
        } # if ( $cullfilesystem eq $thisFilesystem[0] )
    } # if ($utilized{$thisFs} > 0)

} # foreach $thisLine (@df)

exit(0);

sub grabArgs
{
    my (@args) = @_;
    %tmp = ();
    my ($last) = '';
    foreach $this (@args)
    {
        if (substr($this,0,1) eq '-')
        {
            if (length($last) > 0)
            {
                $tmp{$last} = $last;
                $last = substr($this,1);
            }
            else
            {
                $last = substr($this,1);
            }
        }
        else
        {
            if (length($last) > 0)
            {
                $tmp{$last} = $this;
                $last = '';
            }
            else
            {
                $tmp{'none'} .= " ".$this;
            }
        }
    }
    if (length($last) > 0)
    {
        $tmp{$last} = $last;
    }

    return *tmp;
}

1;
{/code}

I call the following script from a crontab job:

{code}

#!/bin/bash
# Culls oldest files when free disk space falls below a threshold
# D Conway 2010 ver 1.0

if [ ! `pgrep culler.pl` ]
then
    echo `date` runCuller.bash Running culler:
    /home/darren/support_scripts/culler.pl
    echo `date` runCuller.bash Culler completed.
else
    echo `date` runCuller.bash Culler already running.
fi
{/code}

Enjoy

Dazz

-- DazzConway - 03 Jul 2011


RelatedProjectsForm

ProjectSummary A collection of Perl and Bash scripts to make sure Motion is running happily, to clean up old images, and a sample CGI to show server status
ProjectStatus Stable
ReleaseVersion
ProjectSubmitter SamuelGoldstein