[Solved] Cron job not running properly

jbo@

I have a user called medusa with password *. I can successfully log in as that user using su - medusa and execute a script in that user's home directory. The script creates a ZIP archive of some files and pushes it via rsync to another location.
As I'd like that script to be executed automatically once every hour, I created a cron job for that user using crontab -e:
Code:
0 * * * * /home/medusa/scripts/myscript.sh > cron_log_myscript.txt
Then I logged out of the user and rebooted the machine.

I see in the cron log that the cron job is being executed. However, the script fails at the stage where it is supposed to create a ZIP archive. The script is terminated and returns my own error string:
Code:
# Create the archive
zip -rq "$archivename" "${files[@]}"
status=$?   # save the status; the echo below would overwrite $?
if [[ $status -ne 0 ]]; then
        echo "Couldn't create ZIP archive"
        exit $status
fi

As mentioned, the script runs fine if I execute it manually. I therefore suspect some permissions problem.
Does a user require any specific permissions to run a cron job?
How can I debug this?

Thank you very much for your help.
 
Cron jobs get a sanitized environment (see crontab(5)); for example, PATH is set to /bin:/usr/bin for users. Since your script runs fine outside of cron, PATH is probably the culprit.

I assume zip is in /usr/local/bin. Make sure to include that in PATH. You can do this by adding this to the top of medusa's crontab:
Code:
PATH=/usr/local/bin:/bin:/usr/bin
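If you want to see the effect directly, here is a small self-contained sketch (the fakezip stand-in and the temp directory are illustrative only, not part of your setup): a command is found only when its directory is on PATH, which is exactly what changes under cron.

```shell
#!/bin/sh
# Sketch: command lookup depends entirely on PATH, which cron resets.
# A throwaway "fakezip" in a temp directory stands in for /usr/local/bin/zip.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$tmpdir/fakezip"
chmod +x "$tmpdir/fakezip"

# Interactive-style PATH that includes the extra directory: found.
if PATH="$tmpdir:/bin:/usr/bin" command -v fakezip >/dev/null 2>&1; then
    interactive=found
else
    interactive=missing
fi

# cron's default PATH (/bin:/usr/bin only): not found.
if PATH=/bin:/usr/bin command -v fakezip >/dev/null 2>&1; then
    under_cron=found
else
    under_cron=missing
fi

echo "with extra dir: $interactive; with cron default: $under_cron"
rm -rf "$tmpdir"
```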
 
Thank you for your reply, tobik.

I tried what you suggested, and zip no longer fails. However, after the zipping step I use rsync to push the archive to a remote location. The archive never arrives there, so the rsync step doesn't appear to be working.
I am using rsync over SSH:
Code:
# Sync to backup server
rsync -ach --rsh "${rsync_connector}" "./$archivename"  "${remote_host}":"$remote_path/$name/"

# Remove backups that are more than 14 days old.
ssh "${remote_host}" find "$remote_path/$name/" -ctime "${backup_age}" -delete
I set up SSH keys so the script can run without any user interaction (no password prompt), and that part also works when I run the script manually.
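Whether or not the keys turn out to be the problem, it can help to make the transport fully explicit so it doesn't depend on the invoking environment at all. A sketch (the identity-file path is a placeholder, not from your script): -i names the key explicitly, and BatchMode=yes makes ssh fail immediately instead of hanging on a password prompt under cron.

```shell
#!/bin/sh
# Build the rsync transport explicitly rather than relying on defaults.
# The identity-file path is a placeholder; adjust it to your key.
rsync_connector="ssh -i /home/medusa/.ssh/id_ed25519 -o BatchMode=yes"
echo "transport: $rsync_connector"
```

The same options could be reused for the cleanup ssh call that deletes old backups.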

Is there some other "sanitized environment" setting that prevents the script from using the user's SSH keys?
 
Is there some other "sanitized environment" setting that prevents the script from using the user's SSH keys?
Maybe. To find out, I would change the crontab line to the following, so that output sent to stderr is captured as well, not only stdout (or read medusa's mail, if you haven't disabled sendmail on your system):
Code:
0 * * * * /home/medusa/scripts/myscript.sh 2>&1 | cat -n > cron_log_myscript.txt
or run your script like this interactively and see if it breaks too:
Code:
env -i PATH=/usr/local/bin:/bin:/usr/bin HOME=/home/medusa PWD=/home/medusa USER=medusa LOGNAME=medusa SHELL=/bin/sh /home/medusa/scripts/myscript.sh
This should be the environment that cron uses.
 
It is actually working well now. The PATH modification you proposed in your first post is what fixed it.
I didn't notice it working because another script was now able to push 25 GB of backups to my poor 10 GB server. Hence the script I was trying to get working with your help mailed me that the remote is out of disk space.

Is manually setting that PATH in the cron job declaration just a temporary work-around or is that how it is supposed to be done?

I thank you very much for your help. I appreciate it a lot!
 
Following a recommendation I read in a similar post, one thing I do is use an ABSOLUTE path for everything, for both commands and files passed as arguments. That's a sure bet, and simple. No ./ anywhere.

Dominique.
 
Thank you for the recommendation to use absolute paths for all binaries.
Is it a good idea to define variables for these paths at the top of the script, so that someone who wants to modify or port the script only has to change the paths stored in those variables?

Something like:
Code:
BIN_RSYNC=/usr/local/bin/rsync
And then using "$BIN_RSYNC" everywhere in the script where rsync is called.
 
Yes, definitely. A nicer solution would be something like this:
Code:
BIN_RSYNC=`whereis -qb rsync`
But you'll need to check whether it actually succeeded (in case rsync isn't installed, for example).
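A sketch of such a check, using sh as a stand-in binary so it runs anywhere (command -v is a portable alternative to FreeBSD's whereis -qb; the abort-on-missing policy is just one option):

```shell
#!/bin/sh
# Resolve a binary once at startup and abort early if it is missing.
# sh stands in for rsync here so the sketch is self-contained.
BIN_SH=$(command -v sh) || {
    echo "sh not found in PATH" >&2
    exit 1
}
echo "resolved to: $BIN_SH"
```

Substitute rsync for sh in a real script; the early exit means the script fails loudly at startup instead of halfway through a backup run.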
 