perl 5.8.8; ssh timeout in thread

Hi,

I'm trying to write a perl script that runs a task on a number of remote servers. The list of servers is long enough that going through it sequentially is too slow, so I split it across several threads, each doing the same task. It works as expected so far. The logic of the script is as follows:

Code:
use threads;

# prep work: build the host list, then divide it among worker threads
# $ESTIMATED_WORKERS == calculated number of threads
for (1 .. $ESTIMATED_WORKERS) {
..
    push @workers, threads->new(\&worker_remote_cmd, @args);
..
}

# wait for the workers to finish, collecting the hosts that failed
foreach (@workers) {
    push @total_failed, @{$_->join()};
}

sub worker_remote_cmd {
    ...
    # the actual ssh command; let's say it's scp
    `scp -i ~/.ssh/my_key /local/file $SERVER:/remote/location > /dev/null 2>&1`;
    ...
    return \<ARRAY_OF_FAILED_HOSTS>;
}

So everything is OK as long as the scp command doesn't hang, for example due to a missing authorized key. The goal is to set a timeout for the scp command (or for any command run in a thread, for that matter). I googled around and found an alarm-based solution, though further reading suggests that mixing signals and threads is not a good idea.
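
For reference, the alarm wrapper I found looks roughly like this (the 30-second limit is only an example):
Code:
# roughly the wrapper I found; the SIGALRM + ithreads mix is what worries me
my $ok = eval {
    local $SIG{ALRM} = sub { die "scp timed out\n" };
    alarm 30;    # example limit
    `scp -i ~/.ssh/my_key /local/file $SERVER:/remote/location > /dev/null 2>&1`;
    alarm 0;
    1;
};
# note: even if the timeout fires, the scp child itself may keep running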

The server (where the script is launched from) runs perl 5.8.8, and I can't upgrade it or install additional modules from CPAN. Neither Net::SSH2 nor Net::SSH::Perl is installed on the server.

When the script is executed manually, the user can always ^C out of it. The problem is when the script is scheduled to run automatically/periodically.

If you have something in mind, please share - I'm stuck and nothing wise comes to mind right now.
 
You can use open3 from IPC::Open3:
Code:
use IPC::Open3;
my $chld_pid = open3(\*CMD_IN, \*CMD_OUT, \*CMD_ERR, "scp -i ...." );
Then you can poll with eof(CMD_OUT) or maybe waitpid($chld_pid, WNOHANG)... though waitpid may not work reliably from inside a thread :/

So you can try this way...
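
Rough sketch of what I mean (the sub name and the 30-second limit are just placeholders, and I haven't tried this from inside an ithread):
Code:
use IPC::Open3;
use POSIX qw(WNOHANG);
use Symbol qw(gensym);

# run a command via open3 and give up on it after $timeout seconds
sub run_with_timeout {
    my ($cmd, $timeout) = @_;
    my ($in, $out, $err) = (gensym, gensym, gensym);
    my $pid = open3($in, $out, $err, $cmd);
    close $in;                          # scp needs no stdin here
    my $deadline = time + $timeout;
    while (time < $deadline) {
        return $? >> 8 if waitpid($pid, WNOHANG) == $pid;   # it finished
        sleep 1;
    }
    kill 'KILL', $pid;                  # still running after the deadline
    waitpid($pid, 0);                   # reap it so no zombie is left
    return -1;                          # -1 means "timed out"
}

# e.g. push the host onto the failed list when this returns non-zero;
# if the command is chatty you'd also have to drain $out/$err,
# otherwise a full pipe can block the child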

Or, maybe better, you can get Net::SSH2, set it up in your own directory, and point to it with use lib. In any case, perl threads tend to create more problems than they solve, because ithreads aren't lightweight threads in the usual sense (each new thread gets its own copy of the interpreter data).
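
By "your own directory" I mean something like this (the path is just an example):
Code:
use lib "$ENV{HOME}/perl5/lib/perl5";   # wherever you built/unpacked Net::SSH2
use Net::SSH2;
Keep in mind Net::SSH2 is an XS module, so you'd still have to build it (and libssh2) somewhere you can write to.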
 
Thank you for the hint. Hopefully I'll have some time this weekend to test it. I knew I could use IPC::Open2, but I didn't know about IPC::Open3.

Well, I know perl threads aren't ideal. But then the question is: how do I write a script that does the job in parallel with some level of error control? I could write it in C, but that seems like overkill for what I want to achieve.
 
Just use fork then, and control the flow with IPC. fork is better because threads->create makes a full deep clone of all your data, and your :shared variables are actually perl magic (tied). So with threads you lose more memory and don't really gain much from the variable "sharing".

On the other hand, the fork syscall gives you copy-on-write pages, so you don't lose nearly as much memory, and if one scp hangs it only affects its own child process, not the others.
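
Rough sketch of what I mean (the host list, key path, and the 60-second limit are placeholders; in a real script you'd also cap how many children run at once):
Code:
use POSIX qw(WNOHANG);

my %child;                                     # pid => hostname
my @total_failed;

# fork one child per host; each child execs its own scp
for my $server (@servers) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # list form skips the shell, so spell out $HOME instead of ~
        exec 'scp', '-i', "$ENV{HOME}/.ssh/my_key",
             '/local/file', "$server:/remote/location"
            or exit 127;                       # exec only returns on failure
    }
    $child{$pid} = $server;
}

# parent: reap children, kill anything still alive after the deadline
my $deadline = time + 60;
while (%child) {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        my $server = delete $child{$pid};
        push @total_failed, $server if $? != 0;    # non-zero exit = failure
    }
    if (time > $deadline) {
        kill 'KILL', keys %child;              # give up on the stragglers
        push @total_failed, values %child;
        waitpid($_, 0) for keys %child;        # reap them
        %child = ();
    }
    sleep 1 if %child;
}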
 