Why is stdout locked against read?

When I do this
cmd > outfile &
the file outfile should slowly grow along with the produced output, and with ls -l I can observe that growth.
However, when I now do
cat outfile
in order to read the produced output, that cat blocks with zero bytes of output until cmd has terminated, and only then does it produce the entire output.
I am quite sure it did not always behave this way, and I would very much like to get rid of it.

Specifically, my use case was that I wanted to watch a movie, and my movie collection is on NFS. When I'm travelling, that NFS share gets routed via VPN and, depending on location, it can become quite slow: too slow for synchronized playback, even when the raw throughput might still keep up with the playback speed.
So I thought I could copy the movie to local storage and watch while it copies. That didn't work because of the aforementioned behaviour: while cp copies the file, the output file is locked against reading. I thought the problem was somehow with NFS, so instead of doing cp /remoteshare/file /local/file, I tried cat /remoteshare/file > /local/file, which should be independent of what gets fed to stdout. But that doesn't work either: /local/file is locked against reading until it is no longer being written to.

Finally, doing the Randal Schwartz classic (useless use of cat) seems to solve the issue:
cat /remoteshare/file | cat > /local/file
So this behaviour was quite certainly not there from the beginning, because otherwise it would have been mentioned back then as a useful use of cat.
 
I'm not quite sure I understood this correctly, but when I want to monitor a file as it grows, I use tail -f, not cat.
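
For the movie use case specifically, something along these lines should work (mpv is just an example of a player that can read from standard input; any such player will do):
cp /remoteshare/file /local/file &
tail -c +1 -f /local/file | mpv -
tail -c +1 -f starts at the first byte and keeps reading as the file grows, so playback can begin while the copy is still running.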
 
Reaching the end-of-file on an ordinary file (read(2) returns a zero byte count) causes cat to terminate. This is true regardless of whether any process is also ("occasionally") writing to the file.
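
That is easy to demonstrate (a rough sketch; the filename is arbitrary):
( for i in 1 2 3; do echo "line $i"; sleep 2; done ) > growing &
sleep 1
cat growing
cat prints "line 1" and exits immediately: read(2) returned zero at the current end of the file, even though the writer still has it open.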

The rules change if the standard input is a pipe, which has a slightly different code path for the read(2). If a writer has the pipe open, the reader will stall, rather than return zero.
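
Also easy to see from the shell:
( echo first; sleep 3; echo second ) | cat
cat prints "first", then its read(2) stalls for three seconds instead of reporting end-of-file, prints "second", and terminates only once the writer has closed its end of the pipe.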

I believe that tail -f was invented to deal with the situation you describe, as vmisev has already suggested.

Edit: to clarify: when there is a pipe involved, the reader will only see an end-of-file (read zero bytes) after the writer exits (more precisely, after all writers have closed their end of the pipe). The writer will get SIGPIPE if the reader exits.
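
The SIGPIPE side can be observed in bash (PIPESTATUS is a bash-ism):
yes | head -n 1
echo "${PIPESTATUS[@]}"
head exits after one line, yes is killed by SIGPIPE, and bash typically reports its exit status as 141, i.e. 128 plus signal number 13 (SIGPIPE).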
 
You should not rely on the file size reported for an NFS file while it is open for writing. You are running into a block-buffering issue: the filesystem metadata isn't updated until a complete disk block has been buffered and written. To minimize network IO, the NFS client uses buffered IO.
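
One way to check whether the size shown by ls is merely lagging behind the data (a rough sketch; adjust the path):
while sleep 1; do ls -l /local/file; wc -c < /local/file; done
If wc -c keeps reporting more bytes than ls -l shows while the copy is running, the data is arriving fine and only the metadata is stale.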
 
I tried a simple test (read a line from the NFS file, write it to a local file, sleep for a second, repeat) and catting the local file worked fine. Something other than locking must be going on. To debug this, you can try ktrace -di on cat to see what syscalls it is making... Or maybe your shell does something funky? (Redirection is done by the shell.)
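
The test loop was roughly this (the paths are placeholders):
while IFS= read -r line; do printf '%s\n' "$line" >> /local/file; sleep 1; done < /remoteshare/file
ktrace/kdump is the BSD tracing pair; on Linux the equivalent check would be strace:
ktrace -di cat /local/file
kdump | less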

It may be better to use rsync, hard-link its local temporary file to a different filename, and watch under that name (since the temporary name will disappear once the transfer is over).
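
A sketch of that idea (the temporary filename pattern .file.* is an assumption about rsync's default behaviour, and the sleep just gives rsync time to create the file):
rsync /remoteshare/file /local/ &
sleep 2
ln /local/.file.* /local/watchme
tail -c +1 -f /local/watchme
The hard link keeps pointing at the same inode even after rsync renames the temporary file into place. Alternatively, rsync --inplace writes directly to the final name, so it can be watched without the link trick.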
 