UFS rsync problem - different sizes

Hello,

I have a problem with rsync. I set up FreeBSD 10.0 and want to use it as a file server in my house. I rsynced two HDDs, which are simply mirrored with rsync so that I have a backup in case of data loss or disk failure. I used this command: rsync -a -v --progress --delete /mnt/ /flserv.
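(In case it matters: as far as I understand rsync, the trailing slash on the source means "copy the contents of /mnt", not the /mnt directory itself:)

Code:
rsync -a /mnt/ /flserv   # copies the contents of /mnt into /flserv
rsync -a /mnt  /flserv   # would create /flserv/mnt instead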
After some hours of blinking HDD LEDs, this is my output:
Code:
22:01:02 root@flserv:/mnt$ df
Filesystem   1K-blocks      Used     Avail Capacity  Mounted on
/dev/ada0a     5061628    486416   4170284    10%    /
devfs                1         1         0   100%    /dev
/dev/ada0d     2031132    164608   1704036     9%    /var
/dev/ada0e     1015324      8296    925804     1%    /tmp
/dev/ada0f    20307196   1579584  17103040     8%    /usr
/dev/ada0g   913579612 332856680 507636564    40%    /flserv
/dev/ada1s1 1419039310 331565694 973950472    25%    /mnt

Now tell me, why do ada0g and ada1s1 differ in used space by 1,290,986 kB?

I thought: "Okay, let's check what Windows says." After mounting both partitions into my Samba share, I get exactly the same data size.

Which should I trust now, Windows or FreeBSD? :cool:

I uploaded a picture. It's in German. If you don't understand it, just ask me.

Best regards,
WS.
 

Attachments

  • FreeBSD.jpg (77.5 KB)
I'm not sure. Maybe run the command with --dry-run first if you do not want to risk a missed path, but I am accustomed to using the following, run from inside /mnt:

Code:
rsync -vaHX --delete-delay --partial --stats --numeric-ids --inplace --archive --compress --hard-links --one-file-system --bwlimit=2000 . /fileserv

Note the dot before the destination. There are several ways to pre-test the command (starting deeper in the tree, using a destination deeper in the tree, or the --dry-run parameter above), but I've found it reliable year after year, and there's a chance it will more closely match the total size. Please check it for typographical errors; I usually cannot re-edit the post.
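For a harmless pre-test, something along these lines should work; --dry-run and --itemize-changes only report what would happen, nothing is written (the /flserv path is the one from your first post, adjust as needed):

Code:
# Pre-test: list what rsync would transfer or delete, without changing anything
cd /mnt
rsync -vaHX --delete-delay --one-file-system --dry-run --itemize-changes . /flserv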
 
I don't think there is anything missing. If you take a look at the picture, you will see that everything is fine on Windows: the same number of directories and files, and even the same size in bytes. Maybe there is no problem with rsync, since Windows shows everything correctly. But I still wonder: why does the size differ in FreeBSD? I have read somewhere that different block sizes can cause these problems, but I can't find any information on the web on how to check the block sizes of both HDDs. Also, I don't know if this is even right.

I hope you guys have a little patience with me; I'm not a pro user yet.

Best regards,
WS.
 
There may be a hidden file in the destination directory. rsync uses a random temporary file in the destination directory while receiving a file. If you resume an rsync after a break in the middle of a file, the old temporary file will be left over. You can check with ls -a in /flserv.
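If I remember correctly, rsync names those temporary files after the original, with a leading dot and a random six-character suffix, so something like this should find any leftovers (pattern from memory, double-check it):

Code:
# Look for leftover rsync temporary files (usually named .<original>.XXXXXX)
find /flserv -type f -name '.*.??????'

Unlike ls -a, this also looks inside subdirectories.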
 
Hello Jov! Thank you for the hint, but there are no hidden files there.

The difference is also not that small; it's nearly 1.3 GB. I've been staring at this problem for hours, trying things out, but I can't figure out where it comes from.
 
Run tunefs -p /mnt and tunefs -p /flserv and compare those values. And I apologize for the flserv/fileserv typo above.
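If tunefs(8) does not show what you need, dumpfs(8) dumps the whole superblock, including the block and fragment sizes. Something like this should do it; I am going from memory, so double-check the field names (the device names are taken from your df output):

Code:
# bsize is the filesystem block size, fsize the fragment size
dumpfs /dev/ada0g  | grep -E 'bsize|fsize'
dumpfs /dev/ada1s1 | grep -E 'bsize|fsize'

If bsize/fsize differ between the two filesystems, small files round up to different amounts of allocated space, which could explain part of the difference.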
 
Well, some values differ. I really can't tell if there is something wrong. I'm glad that you are trying to help me.

Take a look:

Code:
13:47:59 root@flserv:/mnt$ tunefs -p /mnt
tunefs: POSIX.1e ACLs: (-a)  disabled
tunefs: NFSv4 ACLs: (-N)  disabled
tunefs: MAC multilabel: (-l)  disabled
tunefs: soft updates: (-n)  disabled
tunefs: soft update journaling: (-j)  disabled
tunefs: gjournal: (-J)  disabled
tunefs: trim: (-t)  disabled
tunefs: maximum blocks per file in a cylinder group: (-e)  2048
tunefs: average file size: (-f)  16384
tunefs: average number of files in a directory: (-s)  64
tunefs: minimum percentage of free space: (-m)  8%
tunefs: space to hold for metadata blocks: (-k)  0
tunefs: optimization preference: (-o)  time
tunefs: volume label: (-L)

And:
Code:
13:47:37 root@flserv:/mnt$ tunefs -p /flserv
tunefs: POSIX.1e ACLs: (-a)  disabled
tunefs: NFSv4 ACLs: (-N)  disabled
tunefs: MAC multilabel: (-l)  disabled
tunefs: soft updates: (-n)  disabled
tunefs: soft update journaling: (-j)  disabled
tunefs: gjournal: (-J)  disabled
tunefs: trim: (-t)  disabled
tunefs: maximum blocks per file in a cylinder group: (-e)  4096
tunefs: average file size: (-f)  16384
tunefs: average number of files in a directory: (-s)  64
tunefs: minimum percentage of free space: (-m)  8%
tunefs: space to hold for metadata blocks: (-k)  6408
tunefs: optimization preference: (-o)  time
tunefs: volume label: (-L)
 
The last line of du /flserv:
Code:
332856680  /flserv
The last line of du /mnt :
Code:
331565694  /mnt

And df gives this output:
Code:
17:02:07 root@flserv:/mnt$ df
Filesystem   1K-blocks      Used     Avail Capacity  Mounted on
/dev/ada0a     5061628    486416   4170284    10%    /
devfs                1         1         0   100%    /dev
/dev/ada0d     2031132    163128   1705516     9%    /var
/dev/ada0e     1015324      8296    925804     1%    /tmp
/dev/ada0f    20307196   1579584  17103040     8%    /usr
/dev/ada0g   913579612 332856680 507636564    40%    /flserv
/dev/ada1s1 1419039310 331565694 973950472    25%    /mnt

So as we see, du gives the same size as df. What else should I check?

EDIT:

I tried it with a different parameter.

du -A /mnt gives: 330152940
du -A /flserv gives: 330152924

Well, at least those are quite close to each other :). Can someone please explain to me why this happens? I think I've learned now that disk usage != actual size. It may sound strange, but I study electrical engineering: we program quite a lot in C, C++ and VHDL and do other hardware work, like AVR microcontrollers, but I've never learned about these basic computer-related things. We have lectures on this topic later. I hope you guys understand and don't get mad at me for asking such basic stuff.
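If I understand it correctly now, du and df count allocated blocks, which depend on the filesystem's block and fragment sizes, while du -A counts the bytes in the files themselves. stat(1) on FreeBSD seems to show both numbers for a single file; a small sketch (the demo file name is made up, and the format specifiers should be double-checked against stat(1) on your version):

Code:
# Apparent size vs. allocated space for a single file
echo "hello" > demo.txt
stat -f 'apparent: %z bytes, allocated: %b blocks (512 bytes each)' demo.txt
# Even a 6-byte file occupies at least one fragment on UFS,
# so the allocated size is larger than the apparent size.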
 