Slow boot and corrupt GPT

Hi _martin,

here's the result:
Code:
# diskinfo /dev/ada0
/dev/ada0    512    500107862016    976773168    4096    0    969021    16    63
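For anyone reading along: those columns are the sector size, media size in bytes, media size in sectors, stripe size, stripe offset, and the firmware C/H/S geometry. diskinfo -v prints the same fields with labels; roughly:
Code:
# diskinfo -v ada0
ada0
        512             # sectorsize
        500107862016    # mediasize in bytes (466G)
        976773168       # mediasize in sectors
        4096            # stripesize
        0               # stripeoffset
        969021          # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.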

# zdb -l /dev/ada0p3
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot'
    state: 0
    txg: 138884
    pool_guid: 13265714063653844692
    errata: 0
    hostname: 'beethoven'
    top_guid: 12413823846737055314
    guid: 12413823846737055314
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 12413823846737055314
        path: '/dev/ada0p3'
        phys_path: 'id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00/p3'
        whole_disk: 1
        metaslab_array: 64
        metaslab_shift: 32
        ashift: 12
        asize: 497954586624
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3

I remember a post with the same GPT problem where I had to run some commands and do some calculations to zero out the last bytes.
It was after the FreeBSD install, of course, or I wouldn't have found this problem at all.
I think I got it from here; the problem was somehow similar and the explanation seemed to make sense to me, dd if=/dev/zero ...
 
Ok, my estimate of your disk size was spot on and the dd commands I shared above hit the end properly. The ZFS size is within the limits of the partition, also OK.
Zeroing out the end of the disk before installation was a just-to-be-safe move to remove stale GPT metadata. gpart destroy -F would do that for you. But I have done this myself sometimes too.
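A minimal sketch of that pre-install wipe, assuming the target disk is ada0 (this destroys the partition table, so only on a disk you are about to reinstall):
Code:
# force-destroy the GPT, wiping both the primary and the backup header
gpart destroy -F ada0
# recreate an empty GPT afterwards (the installer would do this anyway)
gpart create -s gpt ada0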

All these steps are done before installation (i.e. before gpart create) and have no impact on the running system. If it were my disk I'd boot the system, zero out the end of the disk to wipe those last 40 sectors and let gpart create it again. But I'll leave that option as plan B here.

Please confirm you are not using any gmirror RAID and are not putting that disk into some sort of gmirror RAID (that's important). I also suggest you boot the system, do the recovery, make sure gpart shows it's all OK, then boot the recovery USB/CD and do the gpart/dd check again. With this we can see whether your system is doing something to that disk during boot.
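A quick way to check for gmirror, as a sketch (no output, and the geom_mirror module not being loaded, means no mirror is configured):
Code:
# list any configured gmirror devices
gmirror status
# check whether the geom_mirror module is loaded at all
kldstat | grep geom_mirror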

Back to the HDD itself: could there be a problem with it? There could; it has certainly seen a lot of use. But the issue you're experiencing is not convincing enough for me to call it a disk failure with 100% certainty.
 
Hi,
as far as I know, I never configured any gmirror on this system.
During setup a couple of weeks ago there was a "mirror" option that I chose not to use. I just selected the ZFS partitioning and that's it.
The system seems to be working just fine, but I'm curious about this and want to fix it as well.
It's an old HDD, let's say about 5 years old, so not that old; it lived most of its life in a laptop with a M$ OS and never reported any HDD problem before.
Once again, the BIOS full HDD test didn't detect any error, and the only error I've found is when using FreeBSD.
I admit that maybe I didn't prepare the HDD properly for the ZFS partition; I never thought it was necessary and the setup never asked.
So, in my ignorance, in a good way, the hardware was ready to receive the FreeBSD OS.
The story is that I wanted to clean the M$ OS off this laptop and decided to install Linux, but instead I gave FreeBSD a try.
I formatted the HDD and used a USB pendrive to boot the FreeBSD setup, just like I've done a million times with Linux.
 
Whatever you did before the installation doesn't matter; it's what happens after that's important. If you have a vanilla FreeBSD installation and you let the installer do everything, there should be no problem. Is this disk the only disk in your setup (notebook)?
I'd try the dd and boot to the rescue system as I mentioned above. Then, after making sure you don't have anything important on that disk, do what I called plan B above (zero out the end of the disk and let gpart recreate it):

Make a backup of whatever you are going to zero out; save this file somewhere outside this disk too:
Code:
dd if=/dev/ada0 of=ada0-end-backup.blob bs=512 skip=976773128
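Should anything go wrong later, that blob can be written back with the reverse of the command above (a sketch, same offsets):
Code:
dd if=ada0-end-backup.blob of=/dev/ada0 bs=512 seek=976773128 conv=notrunc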

Reference point: check the two sectors just before the backup GPT area plus its first two sectors to see what's there.
Code:
dd if=/dev/ada0 bs=512 skip=976773126 count=4 | (sleep 0.1 && hd )

*warning*: you could destroy your data if this is not done properly (notrunc is not needed on disks, but I think it's a good habit to always use it so you remember it when writing to files):
Code:
dd if=/dev/zero of=/dev/ada0 bs=512 seek=976773128 conv=notrunc
This should write exactly 40 sectors (40*512 = 20480 bytes).
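The arithmetic behind that, as a quick sketch using the diskinfo numbers from above:
Code:
disk_sectors=976773168   # media size in sectors, from diskinfo
seek=976773128           # first sector being zeroed
echo $(( disk_sectors - seek ))           # 40 sectors
echo $(( (disk_sectors - seek) * 512 ))   # 20480 bytes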

This command should then return all 0s:
Code:
dd if=/dev/ada0 bs=512 skip=976773128 | (sleep 0.1 && hd)
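For reference, hd(1) collapses repeated lines, so an all-zero read of those 40 sectors should look roughly like this (the * marks the repeated zero lines, 0x5000 = 20480 bytes):
Code:
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00005000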

gpart will start reporting issues since the backup header is missing. Do the gpart recover and share the dd output as you did before.
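The recovery step itself, as a sketch:
Code:
# rebuild the backup GPT header/table from the primary copy
gpart recover ada0
# verify: [CORRUPT] should be gone from the output
gpart show ada0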
 
Well, I don't have anything important on this disk; it's just a few hours of setup spent making all the hardware work as it's supposed to and getting the nice XFCE desktop look that I love to work with.
Losing this is the same as losing hours of my life, but let's consider this a hobby, so it's kind of entertaining too.
I like to say that if you lose some code and have to rewrite it, the code gets better on the second try.
So "this is the way": I'm wiping all the disk data and reinstalling everything.
Anyway, I'm not sure I'll give ZFS another try; I didn't like it at all. Moving a file takes too long; it seems it's really moving all the data within the same partition to another path.
It took me 5 to 10 minutes to move 15GB within the same ZFS partition. Insane.
I don't want this; moving a file within the same partition should be an instant operation that happens while you press the Enter key and completes before you have time to lift your finger from the key.
In a house, when the street doors are renumbered and your door number changes, you don't need to take your stuff outside and put it back inside, right?
The risk? You may damage some furniture, and some of it may be missing in the end. Now imagine that with important data.
I figured out a while ago how hard it is to use something as basic as a Bluetooth headset, and with all this I'm having second thoughts about using FreeBSD at all.
I couldn't make it work at all, something that should be as easy as pairing it and you're done.
This "wireless" protocol came into our lives to make things easier, not harder.
So what's the point of all this?

For now I'm wiping the whole HDD; then I'll look at the pendrive with FreeBSD in one hand and another pendrive with Linux in the other.
In case I come back to this forum, that is, in case I pick the FreeBSD pendrive, I will let you know whether the problem was gone with the second setup.
Or else... it means I went with the other hand instead.

Thank you guys, and I'm sorry for making you lose your precious time on me, I mean, on my system problems.
 
As I mentioned, that slowness could still be due to the disk issues. But for a notebook disk, 15GB in 5 minutes on the same partition is ~51MBps. While not awesome, it's not that bad either. I doubt you'd get better results on, let's say, an ext4 partition. Or a UFS one, if we stick to FreeBSD.
Note the "ZFS partition" is only the place where ZFS lives. Datasets are then used to organize the FS into logical units. If you are moving data across datasets, you are copying the data.

While the house analogy is nice, it really doesn't work that way with an FS. Did you know that any (really, any) write happening on an FS goes through a cache (i.e. RAM)? Imagine: people are using non-ECC RAM and trusting their data to it. Even more, some are willing to use an FS that doesn't even bother to checksum the data it wrote.

Even better if there's no important data on it. Would you be willing to do those tests I mentioned? If anything, for others who might come after you and google this.
 
If you are moving data across datasets, you are copying the data.
Well, when moving (not copying) within the same partition, it does not really move anything at all.
It just changes the address; the file's data bits stay in exactly the same place on the disk partition. But if you move it to another partition, the whole file gets moved from one partition to the other.
With ZFS it took too long to move some files within the same partition; it seems it copies them to the new location and deletes the original later?
This means I would never be able to move a 200GB file if I only have 10GB free in the partition, which is bad if it works this way.

I didn't clean up the system... yet. I still have some more things I want to try before I do, such as using a Bluetooth sound device.
By the way, I was able to make my Bluetooth headset work, with some lines on the command line.
It's kinda ugly but it worked; I had to find a workaround to make it work inside Firefox (it uses PulseAudio), but I get some sound delay.
 
Well, when moving (not copying) within the same partition, it does not really move anything at all.
I'm not an FS developer and hence not the best one to talk about this, but you need to think differently about partitions when you think about ZFS (and even btrfs, if we want to compare similar FSs). There's no such thing as a partition when talking about ZFS itself; there are pools and datasets. You are not moving only addresses if you are copying data over to a different dataset (it's a completely different namespace/address space, if you like). And of course ZFS is not necessarily on a partition; e.g. all my data pools are on raw disks.
Small demonstration:
Code:
# zfs create -o mountpoint=/f1 rpool/f1
# zfs create -o mountpoint=/f2 rpool/f2
# bdf /f1 /f2
Filesystem 1M-blocks Used Avail Capacity  Mounted on
rpool/f1       57464    0 57464     0%    /f1
rpool/f2       57464    0 57464     0%    /f2
# touch /f1/newfile /f2/newfile
# ls -lai /f1/newfile /f2/newfile
2 -rw-r--r--  1 root  wheel  0 Oct 29 18:12 /f1/newfile
2 -rw-r--r--  1 root  wheel  0 Oct 29 18:12 /f2/newfile
#
As you can see: different datasets, different files, same inode.

But then, it's OK if you don't like ZFS; your data, your choice. If you're up for tracking down the issue with the disk, you can always continue here. The disk is worn and may cause issues, though that was already said.
 
But still, if you move a 200GB file within the same mount point, does it make sense to move all the data around?
I mean, what if "newfile" is a 200GB file sitting in /f1/path1:
/f1/path1/newfile (200GB)
and I move it to
/f1/path2
while only having 1GB free on /f1?
What would happen?
 
I don't have enough information to answer that. You'd need to provide zfs list so we can see the layout of the datasets. But if /f1/path1 is a different dataset than /f1/path2, then what I mentioned above applies.
 
Got it!
So this is the layout:
Code:
NAME                                        USED  AVAIL     REFER  MOUNTPOINT
zroot                                      42.9G   403G       96K  /zroot
zroot/ROOT                                 17.7G   403G       96K  none
zroot/ROOT/13.1-RELEASE_2022-09-29_135322     8K   403G     1.14G  /
zroot/ROOT/default                         17.7G   403G     17.6G  /
zroot/tmp                                   400K   403G      400K  /tmp
zroot/usr                                  25.2G   403G       96K  /usr
zroot/usr/home                             24.1G   403G     24.1G  /usr/home
zroot/usr/ports                            1.12G   403G     1.12G  /usr/ports
zroot/usr/src                                96K   403G       96K  /usr/src
zroot/var                                  1.73M   403G       96K  /var
zroot/var/audit                              96K   403G       96K  /var/audit
zroot/var/crash                              96K   403G       96K  /var/crash
zroot/var/log                              1.21M   403G     1.21M  /var/log
zroot/var/mail                              140K   403G      140K  /var/mail
zroot/var/tmp                                96K   403G       96K  /var/tmp

I moved huge files from my /home/user path to /mnt/, meaning I moved from zroot/usr/home to zroot/ROOT/default.
It's a different dataset, and that explains why moving the files took so long.
I did another test and moved from my /home/user to /home/other, i.e. a move within the same dataset.
Voilà, an instant operation, just as I expected.

I thought I was moving within the same dataset/partition; I was... wrong.
Thank you for this clarification.
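A quick way to see the difference, as a sketch with hypothetical file names based on the layout above:
Code:
# within one dataset (zroot/usr/home): mv is just a rename, instant
time mv /usr/home/user/big.img /usr/home/other/big.img
# across datasets (zroot/usr/home -> zroot/ROOT/default): mv copies, then unlinks
time mv /usr/home/other/big.img /mnt/big.img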
 
Hi,
I'm back.
Got another disk and guess what, same issue.
This time I deleted all partitions and let the setup configure everything.
In the end I got this:
Code:
$ gpart show
=>       40  976773088  ada0  GPT  (466G) [CORRUPT]
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  972044288     4  freebsd-zfs  (464G)
  976773120          8        - free -  (4.0K)
I think I will just ignore this.
 
I had to reread the whole thread as I didn't remember what we were doing here. :) Was this installation done completely automatically, or did you touch the disk layout/partition sizes at all?
Can you paste the output of diskinfo ada0 too? I'm assuming the disk size is 976773168.

For us to compare why there's corruption, I'd need the dd output of the start and the end of the disk, as before. The sizes would have to be adjusted if the disk differs in size, though.
 
Yes, it was a completely clean disk without a single partition.
It was the setup that created the whole layout automatically.
I just can't accept that both disks are dying.

Code:
diskinfo ada0
ada0 512 500107862016 976773168 0 0 969021 16 63
 
Can you tell us more about the HW itself? Is it a notebook or a workstation? Especially in the latter case, did you change the SATA cable too?
I was correct about the disk size. Could you please share the output of these again?
Code:
dd if=/dev/ada0 of=ada0_ok_start bs=512 count=40
dd if=/dev/ada0 of=ada0_ok_end bs=512 skip=976773128
The disk is exactly the same size, so the commands are also the same.
 
It's a notebook, using UEFI boot.

Here's the content of the files.
 

Attachments

  • ada0_ok.zip (1.3 KB)
Don't paste files, paste the content.
Absolutely not. He did good; we need the raw data of the disk, not to dance around copy-pasting text. It also avoids any errors the user might introduce while copy-pasting. I trust the dump, not the user. Also, I asked him to do it this way, as he did before.

rmomota This is one interesting issue you're having. But thanks to this I understand where those "double entries" from your first posts are coming from.
My findings:
- the partition entries are OK and are the same at both the start and the end
- the disk you shared is actually OK; gpart should not be reporting it as corrupted. *Unless I made a mistake, I think this is a gpart bug.

I'll refer to my disk as md0, but it's essentially the same disk as your ada0.
Code:
# gpart show md0
=>       40  976773088  md0  GPT  (466G)
         40     532480    1  efi  (260M)
     532520       1024    2  freebsd-boot  (512K)
     533544        984       - free -  (492K)
     534528    4194304    3  freebsd-swap  (2.0G)
    4728832  972044288    4  freebsd-zfs  (464G)
  976773120          8       - free -  (4.0K)

# diskinfo md0
md0    512    500107862016    976773168    0    0
#

Data for md0 can be written into the LBA range starting at 40 and spanning 976773088 sectors, i.e. ending at 976773128. This corresponds to the free space at the end of the "usable space".

Diskinfo says the size is 976773168, so the backup GPT metadata with 512B sectors should start at 976773128, which it originally does.
But gpart is messing this up, like so:
Code:
; original
00004e40  b7 31 5c 26 0a 08 9f c0  08 60 38 3a 00 00 00 00  |.1\&.....`8:....|       ; LBA of part entries 0x3A386008 ( 976773128 )

; recovered
00004e40  b7 31 5c 26 0a 08 9f c0  0f 60 38 3a 00 00 00 00  |.1\&.....`8:....|       ; LBA of part entries: 0x3A38600F ( 976773135 )

*I'd need to go through the UEFI standard to be able to say whether gpart is doing what it should.

Now OK, this is a problem and it is a SW bug somewhere. But it doesn't explain why your system would take long to boot. Did it boot slowly even when you zeroed out the end of the disk? dd if=/dev/zero of=/dev/md0 bs=512 seek=976773128 conv=notrunc will wipe all LBAs starting from 976773128.

edit: I don't have time for further checks now because of RL (real life), but out of curiosity I created that disk on Linux too. It created the backup header with the partition entries @ 0x3A38600F, so gpart is doing it properly. I'm personally curious how it then has space for 40 LBAs to keep the full table (thinking out loud: maybe the standard says it doesn't need to keep all of them, but the reference has to be updated).
But then, is this how the installer set up your disk? I'd be curious to see this done again on a fresh install (with both the start and the end zeroed out). Once the installer is finished, drop to a shell and check where those GPT entries are. If they are OK, then I'd be curious to see who is changing them. Your BIOS? I've never seen that before...

edit2: I did install FreeBSD in a VM with the exact same disk size as yours. Used the default ZFS settings. I don't see any issues; the partition table in the backup GPT header starts at 0x3A38600F. Did you do something else to that disk that you are maybe not telling? If not, I wonder if that BIOS is doing something that would mess up that header. As I've mentioned above, the best way would be to do a fresh install (make sure the start and the end of the disk are zeroed out), check that header after installation but before reboot, and then again after reboot when the system is up.

Also, to answer my own question of whether gpart is doing it OK (yes, it does):
I went through the UEFI docs; the important part:
GPT Partition Entry Array that is stored in the GPT Header in Partition Entry Array CRC32 field. The size of the GPT Partition Entry Array is Size Of Partition Entry multiplied by Number Of Partition Entries. If the size of the GUID Partition Entry Array is not an even multiple of the logical block size, then any space left over in the last logical block is Reserved and not covered by the Partition Entry Array CRC32 field
I got confused by that 40 in the gpart output as the start of the usable LBAs and thought the metadata would end at disk_size-40. Interestingly enough, the wrong metadata you have makes the same mistake.
Confirmed by the dump you provided: the GPT header needs 32 sectors to store its partition entries, (128*128)/512, i.e. (number of entries * partition entry size) / sector size. And following the quote above: 976773168-32-1 (we address from 0) = 976773135 (0x3A38600F).
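That calculation as a one-off sketch, using the numbers above:
Code:
disk_sectors=976773168           # from diskinfo ada0
table=$(( 128 * 128 / 512 ))     # 128 entries x 128 bytes each = 32 sectors
printf '%d (0x%X)\n' $(( disk_sectors - table - 1 )) $(( disk_sectors - table - 1 ))
# -> 976773135 (0x3A38600F)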
 
Well, I did not zero out the whole disk, or even the end part, this time.
Doing that did not improve the boot performance anyway, and the [CORRUPT] flag persisted as well.
Despite all this, the system seems OK; it's just a matter of making that annoying check go away and having the feeling that everything is alright, as it should be.
Regarding the boot time, I believe it may somehow be related to the BIOS doing something to the disk itself.
How else can I explain that when I run gpart recover to fix it, it stays alright until the next boot?


 
Doing that did not improve the boot performance anyway, and the [CORRUPT] flag persisted as well.
That means something is actively screwing up that metadata. As it's a fresh install, we can safely assume it's not FreeBSD. Again, the test I mentioned a few times already would be helpful: install FreeBSD, drop to a shell, verify the GPT is OK, reboot, check again.
I asked this question because my theory is: the delay is probably not down to a HW problem (disk, cable, or even controller, depending on your setup) but rather to something having a problem with the GPT and trying to "fix" it. And that's where the corruption is coming from too. There's not much "something" in the boot path.

Regarding the boot time, I believe it may somehow be related to the BIOS doing something to the disk itself.
I think that too (though I'm not convinced), as there's not much else that could do it. But I've never seen a BIOS touch disks before; it should not care about that.

How else can I explain that when I run gpart recover to fix it, it stays alright until the next boot?
Well, that's the answer I'm after too. We now know what is happening and why the corruption occurs.

Also, you didn't answer what kind of setup you have, i.e. what HW it is.
 
It's a DELL Latitude 6410, and it had a Toshiba HDD; I've since changed that disk but can't remember the brand of the one inside now.
What kind of setup?
It's a FreeBSD setup with no redundancy and UEFI boot.
 
Setup as in HW setup; you didn't mention what type of HW you are running your FreeBSD on. OK, so it's a notebook. What BIOS version do you have? It would be worth checking whether it has some sort of "smart" features that might be messing up the GPT entries.

Do I understand/remember this correctly? You fix the GPT in FreeBSD (gpart recover), you reboot, UEFI boot, and you see the issue again?
Now this is a long shot, but at this point I'd try to legacy boot it, if your BIOS supports legacy boot. Maybe the UEFI FW has some weirdness in it.
 
That was the setup before; I changed to UEFI to see if booting got any faster.
I just created a VM with VirtualBox and installed FreeBSD; the problem does not happen there.
So I bet this means it's the laptop BIOS doing something.
I will roam around the BIOS settings to check whether there's something related to HDD MBR recovery/redundancy or anything like that.
 
So I bet this means it's the laptop BIOS doing something.
Well yeah, we know it has to be your HW; FreeBSD is setting it up properly. But it's still interesting to see what exactly. The BIOS should have no business touching disks. Maybe Dell did something "smart", but still...

Please let us know if you find something there.

It might be worth checking this out on Linux, for example. Install it there on a GPT partition (best to check how it sets that backup GPT header) and reboot to see whether the issue occurs there too. If it's the BIOS doing this funky business, it should do it there too.
 
Btw. for your particular disk (that size) you can use just this one-liner to check whether the backup header is set correctly: dd if=/dev/ada0 bs=512 skip=976773167 status=none | perl -e 'read STDIN,$d,80; $a=unpack("Q", substr($d,72,8)); printf("GPT backup hdr: part entries start @ 0x%lx ( %d )\n", $a,$a)'

Code:
# dd if=/dev/ada0 bs=512 skip=976773167 status=none | perl -e 'read STDIN,$d,80; $a=unpack("Q", substr($d,72,8)); printf("GPT backup hdr: part entries start @ 0x%lx ( %d )\n", $a,$a)'
GPT backup hdr: part entries start @ 0x3a38600f ( 976773135 )
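(The one-liner reads the last sector of the disk, where the backup GPT header lives, and unpacks the 8-byte little-endian "Partition Entry LBA" field at byte offset 72 of that header.)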
 