Search results

  1. ipv6 -host route to on-link gateway fails

    I know this thread is a bit old, but I'm also on OVH running 11.0-RELEASE-p15 and I get the error others have noted. One of my machines is quite happy to ping their suggested test address while the other returns with:
    ping6 -c 4 2001:4860:4860::8888
    PING6(56=40+8+8 bytes) 2607:5300:60:2db2:: -->...
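
    The usual workaround discussed for this kind of setup is a host route that puts the off-subnet gateway on-link before the default route is added. A rough /etc/rc.conf sketch follows; the gateway address and the em0 interface are placeholders, not values taken from this thread:

      ipv6_static_routes="ovhgw"
      # host route that makes the gateway reachable directly via the NIC
      ipv6_route_ovhgw="-host 2607:5300:60:2dff:ff:ff:ff:ff -iface em0"
      # the default route can then point at that gateway
      ipv6_defaultrouter="2607:5300:60:2dff:ff:ff:ff:ff"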
  2. ZFS Panic Galore

    Beeblebrox, thanks for your help. Mounting the system as read-only allowed me to read all the data from the volume. 100% of it was recovered and no files were corrupted. I've kept an image of the damaged filesystem so on my own time I can try to find and fix the bug in the kernel.
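
    A minimal sketch of the read-only recovery import described here, assuming the pool id and /altroot path quoted elsewhere in the thread and a ZFS version new enough to support the readonly import property:

      # Import the damaged pool without allowing writes, mounted under an alternate root
      zpool import -f -o readonly=on -R /altroot 10433152746165646153 olddata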
  3. ZFS Panic Galore

    Yes, good point, I didn't think of that. As soon as the dd has completed its 450 GB dump I'll try it out.
  4. ZFS Panic Galore

    t1066: I have tried this a number of times. As soon as ZFS starts up the kernel panics. Looking at the backtrace it seems to happen as soon as the system tries to auto-scrub. I was looking at zpool scrub -s but I think it will be a race. As soon as the filesystem copy is done I'll take a few...
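
    For reference, stopping a scrub that has already started is done with the -s flag; whether it can win the race against the panic described here is, as noted, uncertain:

      # Cancel an in-progress scrub on the pool
      zpool scrub -s email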
  5. ZFS Panic Galore

    Please see post #25: zpool import -f -R /altroot 10433152746165646153 olddata
    The only reason there are 2 pools named email is that one is the original that I'm trying to get the data off.
  6. ZFS Panic Galore

    In process: dd if=/dev/ad16s1g > zfsimage.dat
    Hope I'm doing it right... 2 is done albeit a bit complicated since I have 2 pools named email. zpool list does not show the other pool, but zpool import shows:
      pool: email
        id: 10433152746165646153
     state: ONLINE
    status: The pool was last...
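
    A more conventional form of that image copy, plus importing from the resulting file, might look like the sketch below; the destination path, block size, and error-handling flags are illustrative, not taken from the thread:

      # Image the slice to a file, padding unreadable blocks instead of aborting
      dd if=/dev/ad16s1g of=/backup/zfsimage.dat bs=1m conv=noerror,sync
      # Point the import scan at the directory holding the image instead of /dev
      zpool import -d /backup -f 10433152746165646153 olddata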
  7. ZFS Panic Galore

    Thanks for your help. I'll take it up with them. In the meantime I'm dumping the filesystem to a file to test it like that. If it still croaks I'll sign up for FreeBSDCon and take it down there on an external drive.
  8. ZFS Panic Galore

    zpool import -f -R /mnt 10433152746165646153
    cannot import 'email': pool already exists
    zpool import -f -R /altroot 10433152746165646153
    cannot import 'email': pool already exists
    Wasn't sure if the altroot was an option or path... zpool import -f -R /altroot 10433152746165646153 olddata...
  9. ZFS Panic Galore

    Would you suggest I run zpool import -f 10433152746165646153? Nearly every command I've tried so far on this thing has resulted in a kernel panic. So far I've tried 8.3 32-bit, 8.3 64-bit, and now 9.0 64-bit, with very consistent panics.
  10. ZFS Panic Galore

    Running import lists only 1 pool.
      pool: email
        id: 10433152746165646153
     state: ONLINE
    status: The pool was last accessed by another system.
    action: The pool can be imported using its name or numeric identifier and the '-f' flag.
       see: http://www.sun.com/msg/ZFS-8000-EY
    config...
  11. ZFS Panic Galore

    How do you tell zfs to look on that disk for the partition? If I do a zfs list here is what I get:
    zfs list
    NAME    USED  AVAIL  REFER  MOUNTPOINT
    email   118M   356G   118M  /email
    That's my new email zfs filesystem living on /dev/ad0s1g. As you can see from my feeble attempts above I...
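
    Device discovery is done by zpool rather than zfs; a hedged sketch of pointing the import scan at /dev and then pulling in the old pool by its numeric id (so it does not collide with the live email pool) would be:

      # List importable pools found on devices under /dev
      zpool import -d /dev
      # Import the old pool by id under a different name
      zpool import -d /dev -f 10433152746165646153 olddata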
  12. ZFS Panic Galore

    Ok, bit the bullet. Backed up my /email and tried it. Looks like there must be a bug in the kernel. zpool import -f email ad16s1g
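
    Worth noting: zpool import does not take a device argument, so the trailing ad16s1g above would be read as a new pool name rather than a disk. If the intent was to target that disk, a sketch of the usual form (using the numeric id quoted earlier to avoid the duplicate-name clash) would be:

      # Scan /dev for the pool and import it by id under a new name
      zpool import -d /dev -f 10433152746165646153 olddata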
  13. ZFS Panic Galore

    I'm still having problems with this zfs thing. I upgraded to 9.0-release. The replacement filesystem has a ZFS pool called email. I need to find a way to access the email zfs pool from the old disk which is installed in the system. It's on /dev/ad16s1g whereas the "good" zfs system is on...
  14. ZFS Panic Galore

    It would appear that I'm running version 14. I thought I read somewhere that 8.3 was supposed to be version 28. I was planning to upgrade it to 9 so I'll try that and see if it's able to read the 32-bit filesystem that way. Just for kicks I tried to rebuild the kernel (dual-booted to 32-bit)...
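
    Checking which on-disk version a pool carries, and which versions the running kernel's ZFS supports, can be done directly; the pool name below is the one used in this thread, and the first command only works once that pool is imported:

      # On-disk version of the pool
      zpool get version email
      # Versions supported by this kernel's ZFS
      zpool upgrade -v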
  15. ZFS Panic Galore

    Hmmm... Correction: it looks like I still need to build/install to get the version up to date.
  16. ZFS Panic Galore

    The swap was on a slice. There was only 1 ZFS partition.
    FreeBSD cl-t153-284cl 8.3-RELEASE FreeBSD 8.3-RELEASE #0: Mon Apr 9 21:23:18 UTC 2012 root@mason.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC amd64
    zpool import -f -R /dev/ad16s1g email
    Unexpected XML: name=stripesize data="0"...
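
    The -R flag expects an alternate mount root (a directory), not a device node; a hedged corrected form, letting the import scan /dev for the disk and mounting the pool under /mnt, would be:

      zpool import -f -R /mnt -d /dev email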
  17. ZFS Panic Galore

    We completely replaced the hardware. Same issue. I suspect it may be a bug in ZFS where it's unable to deal with some sort of filesystem corruption in the pool. There are now 2 drives in the system, one with FreeBSD 64-bit and the original. Any suggestions on how I might try to mount...
  18. ZFS Panic Galore

    All of the SMART tests passed. I have arranged to have a fresh hard drive installed with a fresh copy of 64-bit FreeBSD on it. I'll see if I can use that to read the ZFS partitions.
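
    For completeness, the sort of SMART checks referred to here are typically run with sysutils/smartmontools; the device name below is a guess for this machine, not taken from the post:

      # Full SMART report
      smartctl -a /dev/ad16
      # Start a long offline self-test
      smartctl -t long /dev/ad16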
  19. ZFS Panic Galore

    The kernel is GENERIC 8.3-RELEASE-p1 as of May 4th, 2012. The only startup options are vm.kmem_size="512M", vm.kmem_size_max="512M", and vfs.zfs.arc_max="160M". These were taken from the ZFSTuningGuide. I also did try recompiling the kernel with the KVA_PAGES option but it would panic...
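
    Those three tunables are loader-time settings, so per the ZFSTuningGuide they would normally sit in /boot/loader.conf (the placement is assumed here, not stated in the post):

      # /boot/loader.conf
      vm.kmem_size="512M"
      vm.kmem_size_max="512M"
      vfs.zfs.arc_max="160M"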
  20. ZFS Panic Galore

    Here is another dump. Hardware problem? I just had the RAM replaced with fresh stuff and it happened again.