listing .zfs dir - kernel panic repeatable

Can anyone confirm the following causes a kernel panic?

1 - create a zfs snapshot with the zfs snapshot command.
2 - change to the .zfs directory.
3 - run ls.
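For anyone trying to confirm, the three steps could be scripted roughly as below. The filesystem name is a placeholder, and the zfs calls are guarded so the sketch is a no-op on a box without ZFS; obviously only run it on a disposable machine.

```shell
# Hypothetical repro sketch; "tank/home" is a placeholder filesystem name.
FS="tank/home"
SNAP="${FS}@repro"

if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "$SNAP"                         # step 1: create a snapshot
    MNT=$(zfs get -H -o value mountpoint "$FS")  # find where the fileset is mounted
    cd "$MNT/.zfs" || exit 1                     # step 2: change to the .zfs directory
    ls                                           # step 3: panics on the affected setup
fi
```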

If I reboot the server, going to the .zfs directory and running ls causes another panic, so it wasn't a random event.

8.1-STABLE zfsv15

--update--

I can list .zfs/snapshot fine, so it's not a recursive bug. It may also not be related to snapshots specifically; I haven't tested yet whether listing .zfs on a fileset with no snapshots also causes a panic.
 
Didn't have crash... (zpool v14)
However, I should warn you that you are running -STABLE branch, which is work in progress.
You should use 8.1-RELEASE
 
zfsv15 is a requirement for this server, so there is a specific reason it's running STABLE.

Thanks for confirming it doesn't happen on zfsv14.
 
Works here using ZFSv14 on 64-bit FreeBSD 7.3 and 8.1, and 32-bit 8-STABLE.

Don't have any ZFSv15 installs to test on.

Is the snapdir property for that filesystem set to hidden or visible?
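It can be queried with something like the following; the filesystem name is a placeholder and the call is guarded for systems without ZFS.

```shell
# Query the snapdir property; "tank/home" is a placeholder filesystem name.
FS="tank/home"
if command -v zfs >/dev/null 2>&1; then
    zfs get -H -o value snapdir "$FS"   # prints "hidden" or "visible"
fi
```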
 
killasmurf86 said:
Didn't have crash... (zpool v14)
However, I should warn you that you are running -STABLE branch, which is work in progress.
You should use 8.1-RELEASE

Well, I haven't had any problems running stable since 5.x. But of course it's possible, especially if you're not following svn-src-stable-8. That said, I couldn't run 8.1 at all; there are way too many bugs in it. In particular, 8-stable fixes a plethora of nasty ZFS stability bugs. Blindly updating at the moment is dangerous:

We're a bit under a week from Code Freeze for the upcoming 8.2-RELEASE cycle. Warn people tracking stable/8 that the branch may be more active than usual.

http://lists.freebsd.org/pipermail/svn-src-stable-8/2010-November/004151.html
 
akitaro said:
Just tested on RELENG_8/i386 with zfs v15 - no crash.

ok this is interesting.

the property is set to hidden.

Just did another test on another fileset and it panicked again.

I need to be careful as this is a production server :)

So I did it on tank/home (/usr/home):

Code:
cd /usr/home
cd .zfs
ls snapshot   # works
ls            # panic

Here are the properties from the new fileset; the same as tank/home except maybe exec and setuid.

Code:
NAME          PROPERTY              VALUE                  SOURCE
tank/zfstest  type                  filesystem             -
tank/zfstest  creation              Tue Nov 23  5:35 2010  -
tank/zfstest  used                  136K                   -
tank/zfstest  available             834G                   -
tank/zfstest  referenced            114K                   -
tank/zfstest  compressratio         1.06x                  -
tank/zfstest  mounted               yes                    -
tank/zfstest  quota                 none                   default
tank/zfstest  reservation           none                   default
tank/zfstest  recordsize            128K                   default
tank/zfstest  mountpoint            /usr/home/zfstest      local
tank/zfstest  sharenfs              off                    default
tank/zfstest  checksum              fletcher4              inherited from tank
tank/zfstest  compression           lzjb                   local
tank/zfstest  atime                 off                    local
tank/zfstest  devices               on                     default
tank/zfstest  exec                  off                    local
tank/zfstest  setuid                off                    local
tank/zfstest  readonly              off                    default
tank/zfstest  jailed                off                    default
tank/zfstest  snapdir               hidden                 default
tank/zfstest  aclmode               groupmask              default
tank/zfstest  aclinherit            restricted             default
tank/zfstest  canmount              on                     default
tank/zfstest  shareiscsi            off                    default
tank/zfstest  xattr                 off                    temporary
tank/zfstest  copies                1                      default
tank/zfstest  version               4                      -
tank/zfstest  utf8only              off                    -
tank/zfstest  normalization         none                   -
tank/zfstest  casesensitivity       sensitive              -
tank/zfstest  vscan                 off                    default
tank/zfstest  nbmand                off                    default
tank/zfstest  sharesmb              off                    default
tank/zfstest  refquota              none                   default
tank/zfstest  refreservation        none                   default
tank/zfstest  primarycache          all                    default
tank/zfstest  secondarycache        all                    default
tank/zfstest  usedbysnapshots       22.5K                  -
tank/zfstest  usedbydataset         114K                   -
tank/zfstest  usedbychildren        0                      -
tank/zfstest  usedbyrefreservation  0                      -

--update--

Ok, I found the cause of the problem: since the panic message mentioned gnuls, I tried it again with /bin/ls and got no panic. So clearly the gnuls binary does something incompatible with .zfs. I will contact the maintainer of the gnuls port to ask him to mark it as unstable with ZFS. Thanks for the help, guys.
 
I have a question: Why were you using gnuls instead of ls?
You know that ls has -G switch, which makes output colorised. ;) And you can adjust colors.....
 
killasmurf86 said:
I have a question: Why were you using gnuls instead of ls?
You know that ls has -G switch, which makes output colorised. ;) And you can adjust colors.....
That's a good question, but mine is about the misrepresentation of the problem. At least the OP redeemed themselves by posting the correct info and resolution, but providing accurate environment information is critical.
 
killasmurf86 said:
I have a question: Why were you using gnuls instead of ls?
You know that ls has -G switch, which makes output colorised. ;) And you can adjust colors.....

Didn't know about -G, many thanks; it was for the colours.
 
oliverh said:
Well, I haven't had any problems running stable since 5.x. But of course it's possible, especially if you're not following svn-src-stable-8. That said, I couldn't run 8.1 at all; there are way too many bugs in it. In particular, 8-stable fixes a plethora of nasty ZFS stability bugs. Blindly updating at the moment is dangerous:



http://lists.freebsd.org/pipermail/svn-src-stable-8/2010-November/004151.html

I updated the src during a quiet period shortly after zfsv15 got merged in; the code is from September. Past experience has generally given me few problems with STABLE branches; however, I tend to use RELEASE as much as possible unless I have a specific reason to need newer code. I never use CURRENT.

I also don't like using .0 releases, so another time I use STABLE is right after a .0 release, as it tends to get a lot of fixes at that point.
 
Now all you have to do is alias ls to ls -G :D
and set the LSCOLORS environment variable the way you like.
Here's mine:
Code:
LSCOLORS=DxGxGxCxBxexcxbxbxFxFb
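For a Bourne-style shell, the whole setup might look like this (csh users would use setenv/alias syntax instead; the colour string is just personal taste, and CLICOLOR is an extra assumption so that colour is on even without typing -G):

```shell
# Colourised BSD ls without gnuls (sh/bash syntax).
export CLICOLOR=1                        # FreeBSD ls: enable colour output
export LSCOLORS="DxGxGxCxBxexcxbxbxFxFb" # per-filetype colour string
alias ls='ls -G'                         # request colour explicitly as well
```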
 
Guys, I got it wrong; it's not gnuls specifically.

It's when listing hidden files.

Can those who tested try again with 'ls -aG'?

--update--

It could be the colours crashing it; I can't test again until later, but ls -G also caused a panic.

Also, this machine is actually 8.1-RELEASE-p1 patched with zfsv15; I made a mistake on that. When it panics, the 'press any key to reboot' prompt doesn't respond and ctrl-alt-del has no effect either; I have to reboot via the APC.
 
Just installed the 8.1 mfsBSD zfsv15 image on a VM, i386 (although the server is amd64); it is clean with no tuning whatsoever and default zfs fileset options.

ls switch used:
-a: no panic
-G: no panic (this caused the live server to panic)
-F: panic (I don't even know what this does; I hit F by accident when aiming for G and it caused a panic) - repeated panics every time
-aG: no panic (this caused the live server to panic)
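The sweep above could be repeated as a small loop, run from inside the .zfs directory of a disposable VM (on the affected install, some of these flags panic the box):

```shell
# Try each ls flag in turn; on a healthy system each call just returns normally.
for flag in -a -G -F -aG; do
    if ls "$flag" >/dev/null 2>&1; then
        echo "ls $flag: returned normally"
    else
        echo "ls $flag: error"
    fi
done
```

For what it's worth, -F just appends a type indicator to each name (/ for directories, * for executables, and so on).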

The behaviour doesn't seem to be consistent between installations, but I have already managed to get a second installation to panic. Next I am going to install an 8.1 image with its default zfs (zfsv14).

--update-- no panic at all on the zfsv14 VM installation.
 
chrcol said:
ok this is interesting.

the property is set to hidden.

That's the problem right there. Depending on the shell you use and how it stores path elements, running apps that work on paths will either fail (as you've noticed) or succeed (zsh works). Any builtins that work on path elements (pwd, pushd/popd, ls) will also fail.

If you want to routinely work in the .zfs/snapshot/ hierarchy, then you either need to use a more modern shell like zsh, or you need to # zfs set snapdir=visible <pool>/<filesystem>

There's no other way around it.

Search the -stable, -current, and -fs mailing list archives for this year if you want all the gory details.

If the shell (or other binary) you are running stores all the path elements in memory as you cd into the .zfs/snapshot/whatever/ hierarchy, then everything will work.

However, as soon as an app has to derive the path by going up or down the directory tree, things fail. When you have snapdir=hidden, .zfs is a pseudo-directory that doesn't actually exist in the path, hence why things like gnuls fail.
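In other words, the workaround on such a setup is along these lines (the filesystem name is a placeholder and the command is guarded for systems without ZFS):

```shell
# Make .zfs a real, visible directory entry; "tank/home" is a placeholder.
FS="tank/home"
if command -v zfs >/dev/null 2>&1; then
    zfs set snapdir=visible "$FS"
    zfs get -H -o value snapdir "$FS"   # should now print "visible"
fi
```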
 
phoenix said:
That's the problem right there. Depending on the shell you use and how it stores path elements, running apps that work on paths will either fail (as you've noticed) or succeed (zsh works). Any builtins that work on path elements (pwd, pushd/popd, ls) will also fail.

If you want to routinely work in the .zfs/snapshot/ hierarchy, then you either need to use a more modern shell like zsh, or you need to # zfs set snapdir=visible <pool>/<filesystem>

There's no other way around it.

A lot of your info was correct, as zfs snapshots are sorta black magic, but not the bit above. It's important to realize that default system utilities should work with supported OS features. That's a universal concept, not simply a FreeBSD one. If a base utility doesn't work with a supported configuration, it's a bug, and if it's a bug that can cause a panic, it's a severe one. Unprivileged utilities, whether in base or ports, shouldn't be able to panic the kernel. Privileged ones shouldn't either, really, but that opens up too many variables to realistically handle gracefully.

I first looked at the original zpool v15 patch and noticed it touched several relevant entries, then I tested the behavior in my CURRENT VM where it worked flawlessly on a v15 pool. Then I checked the PR database and found this:

PR kern/150544

which covers the full scope of the issue and indicates it was caused by the initial import of zpool v15, and that the fix is now in STABLE.
 
So it looks like it may be fixed in STABLE?

Here is what I just tried to do, but it ended badly.

1 - install 8.1 with zfsv14 in VM
2 - upgrade to 8-STABLE which is currently 8.2-PRERELEASE
3 - upgrade to zpool 15 and zfs fileset v4

#3 failed; it panicked during zfs upgrade -a. On reboot it shows it's still on v3, but running upgrade -a again says it's already updated. The pool is on v15, though. ls still doesn't panic.
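For reference, step 3 above amounts to roughly the following (the pool name is a placeholder and the commands are guarded; don't run this against a pool you care about):

```shell
# Upgrade sketch; "tank" is a placeholder pool name.
POOL="tank"
if command -v zpool >/dev/null 2>&1; then
    zpool upgrade -V 15 "$POOL"   # bring the pool format up to v15
    zfs upgrade -a                # bump every filesystem to the newest fs version (v4)
fi
```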

On the server (which is 64-bit with 8 GB of RAM) there is an extra entry called 'shares' in the .zfs dir; it is not a directory. Running ls -l just caused another panic, though; I wanted to see how big it was.

--update--

I set snapdir to visible on the server and it still panics. Much more serious: it now panics on boot (during the rm command that clears /tmp).
 
With that last bit of info, there may finally be enough to solidly identify your issue.

If you're saying that
Code:
ZFS filesystem version 3
and the zpool is version 15, then you have your problem. ZFS v4 contains changes to the on-disk format to accommodate new zpool properties, meaning zpool v15 properties will not work on a ZFS FS at v3. You should verify and document that the steps you outlined are correct and reproducible, then file a PR with the info you have gathered.
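The mismatch can be confirmed with something like the following (pool and filesystem names are placeholders, and the calls are guarded):

```shell
# Compare the pool version against the filesystem version; names are placeholders.
POOL="tank"
FS="tank/home"
if command -v zpool >/dev/null 2>&1; then
    zpool get version "$POOL"           # e.g. 15
    zfs get -H -o value version "$FS"   # should be 4 to match a v15 pool's features
fi
```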
 
That was on the VM, which is now fixed; the production server is already on v4.

Do you have any idea why the production server would have .zfs/shares?

I filed a PR hours ago and have no receipt for it yet. Also, the extra panics on the VM are due to kmem issues, as it's a 512 MB RAM 32-bit installation. I am ignoring those and looking specifically at the ls panics.
 
Galactic_Dominator said:
Yes I do, and so would you if you read the PR I linked to.

I read it but don't understand most of it; I am no developer.

I can repeat what's going on with the VM now, though.

If I install FreeBSD using the original code, i.e. 8.1 with zfsv14, there is no .zfs/shares/ and no ls panics.

If I install FreeBSD using the mfsBSD zfsv15 image, it creates the filesystem with .zfs/shares/ in every fileset, and ls -l etc. panics.

Since this problem seems to originate right at fileset creation, is it one that can be fixed without wiping and starting again?

I am about to patch this now to 8.1-STABLE and see what happens.
 
The VM is fine after updating to the latest STABLE code. That's good news then.

The production server is also ok now; I booted it in single-user mode to set snapdir back to hidden, and it booted ok after that. I will update its code base when 8.2 is released.
 
Hrm, what's the .zfs/shares/ folder for?

Guess it's time to start reading up on more current versions of ZFS and all the goodies that come with it. :)
 