Mapping between physical disks, partitions and mount points (lsblk missing in FreeBSD)

Hello

I'm trying to understand the relationships between physical disks, partitions and mount points. Until now, on non-FreeBSD OSs, I just used lsblk and got a perfect overview:

Code:
$ lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda  8:0  0 931.5G  0 disk 
├─sda1  8:1  0  500M  0 part  /boot
└─sda2  8:2  0  931G  0 part 
  ├─vg_xldesk-lv_root (dm-0) 253:0  0    50G  0 lvm  /
  ├─vg_xldesk-lv_swap (dm-1) 253:1  0  17.7G  0 lvm  [SWAP]
  └─vg_xldesk-lv_home (dm-2) 253:2  0   1.8T  0 lvm  /home
sdc  8:32  0 232.9G  0 disk 
└─sdc1  8:33  0 232.9G  0 part 
  └─md1  9:1  0 232.9G  0 raid10 /data
sdb  8:16  0 931.5G  0 disk 
└─sdb1  8:17  0 931.5G  0 part 
  └─vg_xldesk-lv_home (dm-2) 253:2  0   1.8T  0 lvm  /home

All disks, partition labels, and mount points are listed. Is there anything similar on FreeBSD?

At the moment, I'm missing the link between a mount point (e.g. /) and the related disk and partition.

Thanks a lot for any help.
Kind regards,
Tom
 
At the moment, I'm missing the link between a mount point (e.g. /) and the related disk and partition.
Simply look at the output of mount:
Code:
dice@j2:~> mount
/dev/ad4s1a on / (ufs, local)
devfs on /dev (devfs, local, multilabel)
/dev/ad4s1d on /tmp (ufs, local, soft-updates)

That means that / is mounted on the first disk (ad4), first slice (s1), first partition (a).
/tmp is mounted from the first disk (ad4), first slice (s1), third partition (d). Yes, I'm not counting wrong: the c 'partition' isn't really a partition, it always spans the whole slice.
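Those naming rules are easy to encode. A minimal sketch in plain sh (an illustration only, not an official tool; the regexes cover just the classic adN slice naming shown above):

```shell
# Hypothetical helper: split a classic FreeBSD device name such as
# "ad4s1a" into disk (ad4), slice (s1) and partition letter (a).
decode_dev() {
  dev=${1#/dev/}   # drop a leading /dev/ if present
  disk=$(printf '%s\n' "$dev"  | sed -E 's/^([a-z]+[0-9]+).*/\1/')
  slice=$(printf '%s\n' "$dev" | sed -E -e 's/^[a-z]+[0-9]+(s[0-9]+).*/\1/' -e 't' -e 's/.*//')
  part=$(printf '%s\n' "$dev"  | sed -E -e 's/^[a-z]+[0-9]+(s[0-9]+)?([a-h])$/\2/' -e 't' -e 's/.*//')
  printf 'disk=%s slice=%s part=%s\n' "$disk" "$slice" "$part"
}

decode_dev /dev/ad4s1a   # disk=ad4 slice=s1 part=a
decode_dev /dev/ad4s1d   # disk=ad4 slice=s1 part=d
```

The same idea would need extending for GPT names like ada0p2, which are better read from the partitioning tool itself.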
 
Or look at the output of df
Code:
> df -HT
Filesystem    Type     Size    Used   Avail Capacity  Mounted on
/dev/ada0s4a  ufs      780M    445M    273M    62%    /
devfs         devfs    1,0k    1,0k      0B   100%    /dev
/dev/ada0s4b  ufs      780M    282M    435M    39%    /var
/dev/ada0s4d  ufs      520M    4,3M    474M     1%    /tmp
/dev/ada0s4e  ufs       15G    2,1G     11G    16%    /usr
/dev/ada0s4f  ufs      4,2G    426M    3,4G    11%    /home
 
Good evening SirDice and getopt

Thank you very much for your answers! I wrote my question because most of my entries do not show the disk information, and I had missed the other one.

Based on your answers, I assume that devfs and tmpfs are virtual file systems - but don't they have a physical mapping?

Here are the relevant entries from the mount example:
Code:
> mount
devfs on /dev (devfs, local, multilabel)
tmpfs on /etc (tmpfs, local)
tmpfs on /mnt (tmpfs, local)
tmpfs on /var (tmpfs, local)
or the df output:
Code:
> df -HT
Filesystem  Type  Size  Used  Avail Capacity  Mounted on
devfs  devfs  1.0k  1.0k  0B  100%  /dev
tmpfs  tmpfs  33M  5.6M  28M  17%  /etc
tmpfs  tmpfs  4.2M  8.2k  4.2M  0%  /mnt
tmpfs  tmpfs  5.4G  312M  5.1G  6%  /var
Thank you very much,
Kind regards,
Tom
 
Or look at the output of df
[...]
That works if using UFS but not ZFS. mount(8) is probably a better choice or you could use
gpart(8)
Code:
% gpart show -p ada0
  34            234441581    ada0  GPT  (112G)
  34                    6          - free -  (3.0K)
  40                 1024  ada0p1  freebsd-boot  [bootme]  (512K)
  1064                984          - free -  (492K)
  2048            8388608  ada0p2  freebsd-swap  (4.0G)
  8390656       222298112  ada0p3  freebsd-zfs  (106G)
  230688768       3752847          - free -  (1.8G)
%
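That output is quite script-friendly, by the way. A sketch of parsing it with awk (the sample from above is pasted inline so the snippet runs anywhere; on a real system you would pipe gpart show -p in instead):

```shell
# Keep only real partition lines (third field ends in pN) from sample
# `gpart show -p` output, and print partition name, type and size.
gpart_sample='       34  234441581    ada0  GPT  (112G)
       34          6          - free -  (3.0K)
       40       1024  ada0p1  freebsd-boot  [bootme]  (512K)
     1064        984          - free -  (492K)
     2048    8388608  ada0p2  freebsd-swap  (4.0G)
  8390656  222298112  ada0p3  freebsd-zfs  (106G)
230688768    3752847          - free -  (1.8G)'

printf '%s\n' "$gpart_sample" | awk '$3 ~ /p[0-9]+$/ { print $3, $4, $NF }'
```

This prints one line per partition (name, type, size) and skips the header and free segments.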
 
Hi protocelt,

Thank you very much for this note:
That works if using UFS but not ZFS. mount(8) is probably a better choice or you could use
gpart(8)

Unfortunately, gpart show -p ada0 has several drawbacks:
  • I have to query each disk manually. (I would like a single command that gives the overview, without interpreting results from previously called commands.)
  • Therefore I need to know which disks are in the system.
  • It does not show the mount points.
... and that's why I love lsblk: it works with every file system and displays everything we need for a complete disk overview.
Kind regards,
Tom
 
gpart show -p will show all attached disks and their partition information; however, it's true that gpart(8) won't show a directory tree on the disk partitions. I'm not sure there is an equivalent tool to lsblk for FreeBSD. You may be able to cobble a script together that uses a few existing tools to get the information and format it the way you want. :)
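As a hedged sketch of that cobbling idea (not a finished tool): for simple UFS setups, the mount output alone already yields the disk → partition → mount point chain. The sample mount output is pasted inline so the snippet runs anywhere, and the suffix-stripping rule is an assumption that only handles plain sN[a-h] and pN names:

```shell
# Sketch of a tiny lsblk-like overview built from `mount` output.
# On FreeBSD you would pipe the real `mount` output in instead.
mount_sample='/dev/ada0s4a on / (ufs, local)
devfs on /dev (devfs, local, multilabel)
/dev/ada0s4d on /tmp (ufs, local, soft-updates)
/dev/ada1p2 on /home (ufs, local)'

printf '%s\n' "$mount_sample" | awk '
  $1 ~ /^\/dev\// {                            # skip devfs, tmpfs, ...
    disk = $1
    sub(/(s[0-9]+[a-h]?|p[0-9]+)$/, "", disk)  # strip slice/partition suffix
    printf "%-10s %-14s %s\n", disk, $1, $3
  }'
```

Joining `gpart show -p` details onto each disk line would get this close to a basic lsblk.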
 
You may be able to cobble a script together that uses a few existing tools to get the information and format it the way you want.
Yes, that's exactly what I'm trying to do :-)
Unfortunately, I still do not know how to handle devfs and tmpfs: it looks like they are mounted to nothing :-(
 
Based on your answers, I assume that devfs and tmpfs are virtual file systems - but don't they have a physical mapping?
See devfs(5) and tmpfs(5).

https://en.wikipedia.org/wiki/Device_file#devfs said:
devfs is a specific implementation of a device file system on Unix-like operating systems, used for presenting device files. The underlying mechanism of implementation may vary, depending on the OS.

Maintaining these special files on a physically implemented file system (i.e. harddrive) is inconvenient, and as it needs kernel assistance anyway, the idea arose of a special-purpose logical file system that is not physically stored.

Also defining when devices are ready to appear is not entirely trivial. The 'devfs' approach is for the device driver to request creation and deletion of 'devfs' entries related to the devices it enables and disables.
 
Hi getopt,
thanks a lot for your links and the wiki quotation!
I have already read them, but I'm still not sure how they are handled:

devfs(5):
  • I assume devfs is mounted to a physical drive / partition.
    How can I recognize it?
tmpfs(5):
  • Is tmpfs always mounted to an internal RAM device?
    I'd like to write a script which tells the truth, so I don't lead users in the wrong direction :)
Kind regards, Tom
 
I assume devfs is mounted to a physical drive / partition.
It is not. It is a pseudo file-system, maintained by the kernel. The entries you see in there as "files" are actually nothing but references to kernel data structures that would be usable as devices.

How can I recognize it?
You can recognize that it is not a simple file system that uses a single block device, because the first entry in the output of mount is not a real existing block device:
Code:
devfs on /dev (devfs, local, multilabel)

Is tmpfs always mounted to an internal RAM device?
As far as I know, yes. In some other OS I used a long time ago, there was a RAM file system that could also optionally use a larger block device, but FreeBSD doesn't seem to have that: if you do man tmpfs, it says explicitly that it is an "efficient memory file system", which seems to indicate that it is only a memory file system. The example mount command shows that it doesn't use a block device either.
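For a script, the "first entry is not a real block device" rule is straightforward to encode. A minimal sketch, assuming the convention that real sources live under /dev/:

```shell
# Minimal sketch of the rule above: treat a filesystem as "pseudo"
# (devfs, tmpfs, procfs, ...) when its mount source is not a /dev node.
is_pseudo() {
  case $1 in
    /dev/*) return 1 ;;   # looks like a real block device node
    *)      return 0 ;;   # kernel- or memory-backed pseudo filesystem
  esac
}

is_pseudo devfs        && echo "devfs: pseudo"
is_pseudo /dev/ada0s4a || echo "/dev/ada0s4a: real device"
```

Note that ZFS datasets would also be flagged by this rule, since their mount source is a pool/dataset name rather than a device node, so a real tool needs to treat them as a separate case.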

I'd like to write a script which tells the truth, so I don't lead users in the wrong direction
Define "truth".

In the old days, there was only one type of file system: It took a single block device, and turned it into a file system. The whole mindset of mount and ancillaries like df is built around that.

Then came various memory-based file systems, and later various kernel-data-structure based file systems, such as /proc, the /dev file system you saw above, and on Linux also /sys. Those were file systems that had no block device, so they used a convenient fake thing. Then came NFS (which is actually not a file system, only a network-based remote file system access system), where the thing that used to be a block device is actually the node name of the server and the export name from that server (often identical to the file system name on the server).

On the block device side, the world is also complicated. For example, say you use an operating system that supports both multipath access to disks, and that supports a logical volume manager. The file system will say that it is mounted on /dev/dm-3, but in reality that is two fibre-channel connections to the same physical disk, which is visible as /dev/sdn and /dev/sdx (I'm using Linux naming conventions here, I'm not familiar with the FreeBSD multipath implementation). To get the "truth", you need to know to use DM-specific commands to map /dev/dm-3 to the underlying block devices. Similarly, you may have a file system on logical volume /dev/lv17, which in reality is implemented as a 100GB slice of physical volume group /dev/vg03, which in turn contains 4 disks /dev/sdd, /dev/sde, /dev/sdf and /dev/sdx. Again, lots of commands are required to get to the "truth".

There is another aspect of "truth" we haven't really discussed. You seem to want a command that shows that file system /home is physically located on partition /dev/ada2p4. But I don't like that naming convention. What I really want to tell users is that it is /dev/gpt/data_home, which shows that this is the partition I named "home" on the disk I named "data" (to distinguish it for example from the partition I named "usr" on the same disk, or from the SSD I named "boot"). But another important fact is that this disk is really a Hitachi model HDS5C3030ALA630 with serial number MJ0351YNG9RZ6A. On a system with multiple disks, that is very important to know, because I may have a dozen physical disks that look vaguely similar, of which a half dozen are the same Hitachi model, but only one has this particular serial number (so if need be, I can find the physical artifact). On a sufficiently large system an important part of the "truth" is the WWN of the disk (5000cca228c46d95, which is how to identify the disk when speaking to a SAS expander or FC switch), and if the disk is mounted in an enclosure, the serial number of the enclosure, and the position of the disk in the enclosure (for example "4th drawer from the bottom, 2nd row from the front, 3rd disk from the left", if I'm using the Netapp DE6600 disk enclosure). For a large system, the "truth" has to include information required to physically locate and identify drives.

Let's not even talk about external RAID arrays: What looks to the computer to be a single block device (perhaps accessible via multipath) is in reality dozens or hundreds of disk drives, and to discover the "truth", you need to get into their management systems (which can be heinously complicated, and are usually not user-friendly).

But today the world is much more complicated. For example, look at the output of mount for a ZFS file system: where you usually expect the block device in the output of mount, it has a name that is clearly not a block device (I think it is the ZFS pool name, but I'm not an expert on ZFS). To find out the corresponding block device, you have to use a command such as zpool. And the interesting thing is: in the output you may find disks that are not even currently connected, and are not currently accessible as block devices (but were there yesterday).

And with seriously complicated file systems, the output of mount becomes seriously inadequate. You can have a single GPFS file system with tens of thousands of physical disks. If you look at the output of mount there, you see a single fake block device, and in the output of df you will see a very large number for the free space. To really find out which real devices are in use, what hosts they are connected to, what their states are, whether they are currently reachable or being allocated on, mirrored or remote-mirrored or not, if they are internally RAID-ed what the state of the redundancy is, and what physical locations they are installed in, you need to learn to use about a dozen commands, and on a large system parse tens of thousands of lines of output (or use a summary command).

So, you want to tell users the "truth" ?

My suggestion: Have them use mount and df. That will tell them the truth, as seen from the view of the real file system that is running.

For the memory or kernel-data-structure based file systems, there isn't anything else. For the simple single-disk file systems like UFS, ext2 and NTFS, teach them that the first entry in the output of mount is a block device, and with very simple rules you can map from a partition or slice to the whole disk (/dev/sda4 -> /dev/sda or /dev/ada0p4 -> /dev/ada0). For multipath / volume-managed and encrypted disks, the block device is the "truth" (it is an accessible block device, and you can use dd to destroy your file system), but there are additional block devices hidden underneath. For NFS and friends, you have to teach them to decode the first entry into a host name and export name. And for interesting file systems (ZFS, QFS, Lustre, GPFS, ...), the "truth" is too complicated for a script to capture.
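Those "very simple rules" might look like this in sh (an illustration only; it covers just the plain Linux sdXN and FreeBSD slice/GPT names mentioned above, not labels, multipath, or volume managers):

```shell
# Map a partition/slice device node to its whole-disk node, following
# the simple rules above. Covers only the plain names discussed here.
to_disk() {
  case $1 in
    /dev/sd*) printf '%s\n' "$1" | sed -E 's/[0-9]+$//' ;;                    # Linux: sda4 -> sda
    /dev/*)   printf '%s\n' "$1" | sed -E 's/(s[0-9]+[a-h]?|p[0-9]+)$//' ;;   # FreeBSD: ada0p4 -> ada0
    *)        printf '%s\n' "$1" ;;
  esac
}

to_disk /dev/sda4     # /dev/sda
to_disk /dev/ada0p4   # /dev/ada0
to_disk /dev/ad4s1a   # /dev/ad4
```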
 
Hi ralphbsz

Thank you very much for your answer and for taking all this time! I really appreciate your help!
Define "truth".
In this context, I define the truth this way:
Create a tool similar to lsblk which collects and displays all useful information about all drives, partitions, mount points and file systems in the system.
My suggestion: Have them use mount and df. That will tell them the truth, as seen from the view of the real file system that is running.
This tool addresses first- and second-level supporters, not system engineers. They should not have to analyze the results of different commands, and should not need to know that ZFS is handled differently from other file systems just to answer "simple" questions.
I'm pretty sure there are many FreeBSD admins who regularly and manually do exactly what a tool like lsblk already does. And we all know there are good reasons to let the computer do what we otherwise do manually :)

My motivation is simple: we have only a few FreeBSD machines and they are too stable! (If we had problems, we would regularly have to deal with technical details ;-))
Therefore, whenever we have questions about a disk system, we struggle to find an answer: we try different commands, and have to use man, Google and forums. In the end we try something and are never sure if it's right...
... and all we need is a tool like lsblk which tells the truth about the disks, partitions, mount points and file systems.

Because we usually have high-level questions:
  • Which disk do we have to change because / is running out of space?
  • Can we resize the partition where /xxx is mounted to get more space? (Which other partitions are on this disk? Which file system?)
  • Is /xxx on ZFS?
I would like to create a tool that answers such questions without requiring the user to study or know many "internals".

btw. this thread is a good example that the shell commands are too complex: it's error-prone and tedious to manually collect all the information from different commands just to get the simple and very useful overview lsblk provides:
Code:
NAME  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda  8:0  0 931.5G  0 disk
├─sda1  8:1  0  500M  0 part  /boot
└─sda2  8:2  0  931G  0 part
  ├─vg_xldesk-lv_root (dm-0) 253:0  0    50G  0 lvm  /
  ├─vg_xldesk-lv_swap (dm-1) 253:1  0  17.7G  0 lvm  [SWAP]
  └─vg_xldesk-lv_home (dm-2) 253:2  0   1.8T  0 lvm  /home
sdc  8:32  0 232.9G  0 disk
└─sdc1  8:33  0 232.9G  0 part
  └─md1  9:1  0 232.9G  0 raid10 /data
sdb  8:16  0 931.5G  0 disk
└─sdb1  8:17  0 931.5G  0 part
  └─vg_xldesk-lv_home (dm-2) 253:2  0   1.8T  0 lvm  /home
btw: I already tried to fork the lsblk tool - and I stopped because I was not able to answer simple questions even after reading many wiki pages for several hours.

At the moment, I'm trying to use Perl to solve this problem because Perl is portable and has good system access...
 
btw. this thread is a good example that the shell commands are too complex
No. The world is too complex. The shell commands simply reflect the complexity of the world.

There are many file systems. They are optimized for different things. Some are really easy to use, but can handle only a single block device. Some require weeks of training, and have several hundred administrative commands, but they can do a lot of things that UFS or ext2 can't do. And because they are all different, they have different shell commands to administer them (some also come with GUIs), and that's where the complexity comes from.

Because we usually have high-level questions:
  • Which disk do we have to change because / is running out of space?
  • Can we resize the partition where /xxx is mounted to get more space? (Which other partitions are on this disk? Which file system?)
  • Is /xxx on a zfs?

These are all good questions. I see it as feasible to write a set of scripts (around mount, df, gpart and so on) that can answer these questions, and can do simple administrative tasks (like creating and expanding file systems and partitions). But I think this is only reasonable to do for simple file systems (such as UFS) that use single block devices. Once you are into the realm of file systems (such as ZFS) that can aggregate multiple block devices, and that have internal data layout (for example RAID built in), and once you are into tools such as LVM and multi path that can hide real block devices, I think using scripts around these systems becomes too complicated. At that point, I think you are better off doing two things: standardize on one or a small set of file systems (for example, declare that only ZFS shall be used on FreeBSD), and train your users to understand ZFS concepts and handle its commands.

You can hide complexity, but it will still exist. As long as the tools you write are perfect, and never make a mistake, and can handle any possible situation, the complexity can remain hidden. For the simple file systems like UFS, writing such scripts around them may be sensible, because it integrates the use of partitioning tools and file system tools into one common set of commands that have a consistent surface to the user. But: the moment a user has to go around your tools, the complexity is back to being visible. And at that point, the administrator is ill-prepared to deal with the file systems (for lack of training and practice), and they have to fight the fact that the system was set up by a tool that is opaque to them. Here is a thought example you can use to test whether you designed your tools sensibly: say one disk is intermittent, works fine for an hour, creates lots of IO errors for an hour, and then is completely broken for the third hour, and then goes back to working fine. Are you sure your tools will do the right thing in this situation? So make sure your scripts and tools are perfect, or face the consequences.
 
Because we usually have high-level questions:
...
I would like to create a tool to answer such questions without the constraint to study / know many "internals".
Such tools do exist: proper documentation of IT infrastructure and systems provides answers of exactly this kind. An intranet wiki, knowledge base or ticket system can give access to it. It also answers questions like "I solved that some time ago but don't remember how any more."
 