Same disk models, different space

I have 2 disks (same model) and want to create a RAID-1 with gmirror.

Using smartctl, both disks show the same capacity:

Code:
Model Family:     Seagate Enterprise Capacity 3.5 HDD
Device Model:     ST4000NM0024-1HT178
Serial Number:    Z4F0F778
LU WWN Device Id: 5 000c50 08777e0d7
Firmware Version: SN02
User Capacity:    4,000,787,030,016 bytes [4.00 TB]

Model Family:     Seagate Enterprise Capacity 3.5 HDD
Device Model:     ST4000NM0024-1HT178
Serial Number:    Z4F05Q1H
LU WWN Device Id: 5 000c50 07b59ed24
Firmware Version: SN02
User Capacity:    4,000,787,030,016 bytes [4.00 TB]

The problem is that gpart show shows a small difference:

Code:
=>        40  7814037088  ada0  GPT  (3.6T)
          40          88        - free -  (44K)
         128         128     1  freebsd-boot  (64K)
         256     8388608     2  freebsd-ufs  (4.0G)
     8388864   134217728     3  freebsd-swap  (64G)
   142606592    33554432     4  freebsd-ufs  (16G)
   176161024   134217728     5  freebsd-ufs  (64G)
   310378752    33554432     6  freebsd-ufs  (16G)
   343933184  7470103944     7  freebsd-ufs  (3.5T)

=>        34  7814037101  ada1  GPT  (3.6T)
          34          94        - free -  (47K)
         128         128     1  freebsd-boot  (64K)
         256     8388608     2  freebsd-ufs  (4.0G)
     8388864   134217728     3  freebsd-swap  (64G)
   142606592    33554432     4  freebsd-ufs  (16G)
   176161024   134217728     5  freebsd-ufs  (64G)
   310378752    33554432     6  freebsd-ufs  (16G)
   343933184  7470103944     7  freebsd-ufs  (3.5T)
  7814037128           7        - free -  (3.5K)

Why does the 2nd disk show 3.5 KB of free space at the end?
 
Because disks are never exactly the same size. It has to do with the way they are manufactured and tested.
 
If one disk fails and the replacement disk is a few KB smaller than the remaining good disk, how can I add the new disk to gmirror?

"gpart backup ada0 | gpart restore -F ada1" will fail as the new disk will be smaller.
 
I think the only option for new servers is to keep some space unallocated at the end of the disk.

And for existing servers, if it ever happens, remove /home2 (used for backups), recreate it with some space left at the end, and then create the gmirror.
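As an illustration, here is a minimal sketch of what that could look like on a fresh disk; ada1 and the partition sizes are only placeholders, the point is giving the last partition an explicit -s instead of letting it fill the disk:
Code:
gpart create -s gpt ada1
gpart add -t freebsd-boot -s 64k ada1
gpart add -t freebsd-ufs  -s 4g  ada1
gpart add -t freebsd-swap -s 64g ada1
# size the last partition explicitly (placeholder value) instead of
# letting it take all remaining space, so some space stays
# unallocated at the end for a slightly smaller replacement disk
gpart add -t freebsd-ufs  -s 3650g ada1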
 
Because disks are never exactly the same size. It has to do with the way they are manufactured and tested.
Nope. All disks of the same model and firmware revision leave the factory at the exact same size. Firmware changes or OEM customizations sometimes change the amount of space compared to the reference implementation, but should be consistent across all drives with those specific features.

Drives from different model families or from different manufacturers sometimes had different numbers of sectors for a given "nameplate capacity". But most OEMs purchase from multiple suppliers for diversity (particularly since the Thailand drive shortage some years ago) while still wanting the drives to be interchangeable, so even this is becoming more standardized, though not 100%.

Even when disks required user involvement to map out bad sectors, the total size of the disk was constant - the only thing that changed was the number of blocks pre-allocated to the "bad block file". This was a hold-over from Berkeley 4.1 Unix (see the "bad144" manpage), which in turn got it from DEC (DEC STD 144). This was done away with as obsolete in FreeBSD 3.0.

If the user-visible capacity of 2 "identical" drives varies, it is almost certainly one of two things:
1) Someone used the Set Maximum Address or Host Protected Area commands on the drive (see the check sketched after this list)
2) Some disk controller (usually RAID) or OS level function has reserved capacity for labels or other metadata
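For point 1, a quick way to check from FreeBSD is camcontrol. A hedged sketch, assuming the drives are ada0/ada1 and that the exact output wording depends on the camcontrol version:
Code:
camcontrol identify ada0   # feature table includes a Host Protected Area (HPA) line
camcontrol hpa ada0        # reports the current HPA configuration, if supported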

In the OP's example, the first clue is that the GPT "start sector" is 34 on one drive and 40 on the other. This would probably indicate that they were labeled / partitioned under different versions of the operating system (or different operating systems) with different ideas about how much space should be reserved for boot blocks, etc.
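One way to see these numbers directly is gpart list, which prints the scheme's first and last usable sectors; a small sketch, filtering just the relevant lines:
Code:
gpart list ada0 | grep -E 'scheme|first|last'
gpart list ada1 | grep -E 'scheme|first|last'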
 
In the OP's example, the first clue is that the GPT "start sector" is 34 on one drive and 40 on the other. This would probably indicate that they were labeled / partitioned under different versions of the operating system (or different operating systems) with different ideas about how much space should be reserved for boot blocks, etc.

To avoid such variations it is a good idea to create the GPT table on one drive, write a backup file of the GPT header and restore it to the other drive:
Code:
sgdisk -b <file> <device>
sgdisk -l <file> <device2>
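On FreeBSD the native tools can presumably do the same, along the lines of what the OP already tried, with the backup going through a file instead of a pipe (ada0 as the source here, ada1 as the target):
Code:
gpart backup ada0 > gpt.backup
gpart restore -F ada1 < gpt.backup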
 
When creating the GPT partitioning scheme with GNU Parted or Windows, the GPT "start sector" is 34. With FreeBSD 11.1's gpart the "start sector" is 40.

But in CyberCr33p's gpart show listing, all the partition start sectors and sizes match perfectly on both drives, so there are no problems there.
 
Firmware changes or OEM customizations sometimes change the amount of space compared to the reference implementation, but should be consistent across all drives with those specific features.
That explains well why OEM drives sometimes have slightly different sizes than the official retail versions.

To avoid such variations it is a good idea to create the GPT table on one drive, write a backup file of the GPT header and restore it to the other drive:
Code:
sgdisk -b <file> <device>
sgdisk -l <file> <device2>
There is one catch, however.
If you have slightly differently sized drives, you should copy the partition table from the smaller one to the larger one. The reason is that the GPT table is stored twice on the drive, once at the beginning and once at the end.
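If the table does end up on a larger target disk (for example after a raw copy of the first sectors), the secondary GPT will no longer sit at the end of the disk and gpart will report the scheme as CORRUPT. As far as I know, gpart recover rewrites the secondary table at the proper location; a sketch with ada1 as the larger target:
Code:
gpart show ada1      # scheme reported as CORRUPT after the copy
gpart recover ada1   # rewrite the secondary GPT at the end of the disk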
 
In the OP's example, the first clue is that the GPT "start sector" is 34 on one drive and 40 on the other. This would probably indicate that they were labeled / partitioned under different versions of the operating system (or different operating systems) with different ideas about how much space should be reserved for boot blocks, etc.

I am not sure, as this is an old server and the initial installation was made many years ago. But I think the gpart on the first disk was made with mfsbsd 10. Then I used mfsbsd to write my custom FreeBSD 10 image to the server. Then after some years I upgraded FreeBSD to 11.

When the 2nd disk failed recently, I used "gpart backup ada0 | gpart restore -F ada1", which failed with: "gpart: size '2489354127': Invalid argument"

Are there any changes related to this between FreeBSD 10 and 11?

Is it even possible to create the same partitions manually? I think it's not possible, as gpart -b 34 is not a valid option any more.
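For what it's worth, a hedged sketch of what manual re-creation might look like, assuming gpart add on FreeBSD 11 still accepts explicit -b/-s/-i values and reusing the offsets from the gpart show output above (only the first two partitions shown; the rest follow the same pattern):
Code:
gpart create -s gpt ada1
gpart add -t freebsd-boot -b 128 -s 128     -i 1 ada1
gpart add -t freebsd-ufs  -b 256 -s 8388608 -i 2 ada1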
 