Solved: File too large

So I noticed that my regular rsync offsite backup script was failing. I thought the problem was with rsync, so I tried to scp the 550GB file over instead.

After 514GB, scp failed with the message "File too large".

So how do I find out how large a file the UFS2 file system supports?

Both the source and destination servers run FreeBSD 9.3 64-bit, and the 550GB file already exists on the source server.

Source server file system, as reported by dumpfs (size 16T):
newfs -O 2 -U -a 2 -b 65536 -d 65536 -e 8192 -f 8192 -g 16384 -h 64 -i 131072 -k 0 -m 2 -o space -s 35156247472

Destination server file system, as reported by dumpfs (size 5.4T):
newfs -O 2 -U -a 32 -b 4096 -d 4096 -e 1024 -f 4096 -g 131072 -h 64 -i 65536 -j -k 123 -m 2 -o space -s 11721066240
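
If I read dumpfs(8) correctly, the full superblock dump should also show the limit itself, so something like this (assuming 9.3 already prints the maxfilesize field like newer releases do) ought to answer my own question for the destination device:

Code:
dumpfs /dev/stripe/liitp1 | grep maxfilesize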

Of course, as a workaround I can make the backup files smaller by splitting them into volumes...
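
For example, with plain split(1) and cat (only a rough sketch; the file names are made up, and rar could of course produce multi-volume archives directly):

Code:
# cut the archive into 100GB pieces that stay well below whatever the limit is
split -b 102400m backup.rar backup.rar.part_
# ...transfer the pieces...
# reassemble on the destination
cat backup.rar.part_* > backup.rar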
 
With a file that big it may be best to split it into pieces. Of course, that adds complication if you need to restore. I'm also not sure how scp handles big files. I know rsync will split the file into chunks and only send the changed sections. Of course, depending on how the file is created, large sections of it could change every time. Even so, copying a file that big is a bit of a PITA.
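
For a single huge file, something along these lines is roughly what I would try (just a sketch, paths and host are made up): --inplace lets rsync update the existing destination file instead of building a complete temporary copy, and --partial keeps whatever was already transferred if the link drops.

Code:
rsync -av --partial --inplace --progress /backup/huge.rar user@offsite:/backup/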

It's strange to get a file size error though. I wouldn't have thought anything in FreeBSD would really care about the file size unless it actually hits a file system limit. As a matter of interest, what's the backup method/file format and what error is rsync giving? How often are you doing this backup?

Personally, the idea of a 550GB backup file scares me. This doesn't solve your problem, but I scrapped UFS and traditional backup methods (dump/rsync/etc.) years ago. All my mass storage (>1TB) is on ZFS pools using snapshots + send/recv for backup. It's a nice feeling to have several TB of data sync each night within minutes, and to have direct access to every file, and every previous version, without having to worry about extracting monolithic archives.
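
The nightly cycle looks roughly like this (pool, dataset and host names are made up, and the incremental send assumes the previous snapshot still exists on both sides):

Code:
# take tonight's snapshot
zfs snapshot tank/data@2017-10-28
# send only the changes since yesterday's snapshot to the backup box
zfs send -i tank/data@2017-10-27 tank/data@2017-10-28 | ssh backuphost zfs recv -F backup/data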
 
ulimit shows: "unlimited"

I am rsyncing rar-ed files, but that does not really matter. What matters is why I cannot create bigger files:

Code:
# dd if=/dev/zero of=myfile.dat bs=1048576 count=800000
dd: myfile.dat: File too large
525315+0 records in
525314+0 records out
550831652864 bytes transferred in 9872.396648 secs (55795130 bytes/sec)
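
Interestingly, that byte count is almost exactly what I would expect if the destination file system's 4K block size is the cap. Assuming UFS2 uses 12 direct block pointers, three levels of indirect blocks and 8-byte block pointers (my assumption, I have not checked the source), the limit works out as:

Code:
# rough upper bound: (12 + n + n^2 + n^3) * bsize, where n = bsize / 8 pointers per indirect block
bsize=4096; n=$((bsize / 8))
echo $(( (12 + n + n*n + n*n*n) * bsize ))
# prints 550831702016 - only ~48KB past the point where dd gave up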


ZFS feels like black magic to me, so I am not using that. UFS + gmirror/gstripe/gconcat has so far served me well.

Please note that FreeBSD 9.3 has been EoL since December 2016 and is not supported any more.

I apologize, but I was hoping to find someone with a similar problem. A quick search did not give good results.

Has the UFS2 file system changed a lot between FreeBSD 9.3 and 11.1?
 
UFS has hardly changed. Also, there really isn't much black magic involved with ZFS. These days it's one of the most robust file systems out there. Not to mention more flexible, because you won't risk scenarios where you have a surplus of free space in one slice but a slight shortage in another.

Do note though that I'm not favoring either filesystem over the other. Both have their uses and both have their place.
 
Destination server file system with dumpfs (size 5.4T):
newfs -O 2 -U -a 32 -b 4096 -d 4096 -e 1024 -f 4096 -g 131072 -h 64 -i 65536 -j -k 123 -m 2 -o space -s 11721066240

This filesystem might not work properly.
Those parameters (in particular -b 4096 combined with -f 4096) are more or less uncommon, and having blocksize != fragsize*8 tends to give strange problems.

I remember experimenting with these things many years ago, because according to newfs(8) one may twist and tune all of these options - but the outcome can be a filesystem that does not behave properly (much like what the OP is experiencing here). I do not recall the exact effects, but my conclusion was: just stick with the defaults.
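
If one wants to see what newfs would pick on its own before committing to anything, there is a dry-run flag (-N prints the parameters without actually writing a file system; the device name here is just the one from this thread):

Code:
# show the parameters newfs would use, without touching the disk
newfs -N -U /dev/stripe/liitp1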

Concerning ZFS: it is black magic. On UFS we have many individual filesystems with a known structure that can be accessed directly by seeking to the proper position (it might even be possible to repair one manually). With ZFS we have one big object spanning multiple disks, almost impossible to handle from the outside, and we hardly know what's inside. There is a "debugger", but it takes two days to start up. Yes, that is black magic.
Nevertheless, in all these years I have never had any problem with it (but still, {/ /usr /var} stay on UFS here).
 
So yes, the problem was the file system. I created a new file system with these parameters:

newfs -U -b 65536 -e 8192 -f 8192 -i 131072 -k 0 -m 2 -o space /dev/stripe/liitp1

dumpfs -m then displayed:

newfs -O 2 -U -a 2 -b 65536 -d 65536 -e 8192 -f 8192 -g 16384 -h 64 -i 131072 -k 0 -m 2 -o space -s 11721066240 /dev/stripe/liitp1

And now I had no problem creating a 900GB test file.
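
By the way, a quicker sanity check than dd'ing from /dev/zero is to create a sparse file of the target size with truncate(1) - it only allocates metadata, so it returns almost instantly (mount point and file name are made up):

Code:
# succeeds on the rebuilt file system; on the old 4K-block one the same call would hit "File too large"
truncate -s 900G /mnt/backup/test_sparse.dat
rm /mnt/backup/test_sparse.dat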
 