I assume devfs is mounted to a physical drive / partition.
It is not. It is a pseudo file-system, maintained by the kernel. The entries you see in there as "files" are actually nothing but references to kernel data structures that would be usable as devices.
You can recognize that it is not a simple file system that uses a single block device, because the first entry in the output of mount is not a real existing block device:
Code:
devfs on /dev (devfs, local, multilabel)
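A script can exploit exactly this: a pseudo file system's "source" is just a tag string like devfs, with no node under /dev behind it. A minimal sketch (portable sh; the function name is my own invention):

```shell
# Sketch: decide whether a mount's "source" field is a real device node.
# Pseudo file systems (devfs, tmpfs, procfs) use a plain tag string that
# does not exist as a character or block device under /dev.
is_real_device() {
    src=$1
    [ -c "$src" ] || [ -b "$src" ]
}

is_real_device devfs     && echo "devfs: real"     || echo "devfs: pseudo"
is_real_device /dev/null && echo "/dev/null: real" || echo "/dev/null: pseudo"
```

Running this prints "devfs: pseudo", since there is no file named devfs, while /dev/null is a real device node.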
Is tmpfs always mounted to an internal RAM Device?
As far as I know, yes. In some other OS I used a long time ago, there was a RAM file system that could also optionally use a larger block device, but FreeBSD doesn't seem to have that: man tmpfs says explicitly that it is an "efficient memory file system", which seems to indicate that it is only a memory file system. The example mount command shows that it doesn't use a block device either.
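You can see the "no block device" part directly in how tmpfs is configured. A purely illustrative /etc/fstab entry (FreeBSD syntax; the size and mount point are made up):

```
# the "device" field is just the literal tag "tmpfs" -- no /dev node
tmpfs  /tmp  tmpfs  rw,size=512m,mode=1777  0  0
```

The size option only caps how much memory the file system may consume; it never names a backing device.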
I'd like to write a script which tells the truth, so I don't lead users to wrong directions
Define "truth".
In the old days, there was only one type of file system: it took a single block device and turned it into a file system. The whole mindset of mount and ancillaries like df is built around that.
Then came various memory-based file systems, and later various kernel-data-structure based file systems, such as /proc, the /dev file system you saw above, and on Linux also /sys. Those were file systems that had no block device, so they used a convenient fake thing. Then came NFS (which is actually not a file system, only a network-based remote file system access system), where the thing that used to be a block device is actually the node name of the server and the export name from that server (often identical to the file system name on the server).
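For the NFS case, the decoding a script needs is simple string surgery on the host:export form. A sketch, with a made-up server name and export path:

```shell
# Sketch: for NFS, the first mount field is "host:export", not a block
# device. Plain parameter expansion splits it; "filer01" and the export
# path are illustrative, not real systems.
src='filer01:/export/home'
host=${src%%:*}          # everything before the first ':'
export_path=${src#*:}    # everything after it
echo "$host $export_path"   # → filer01 /export/home
```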
On the block device side, the world is also complicated. For example, say you use an operating system that supports both multipath access to disks and a logical volume manager. The file system will say that it is mounted on /dev/dm-3, but in reality that is two fibre-channel connections to the same physical disk, which is visible as /dev/sdn and /dev/sdx (I'm using Linux naming conventions here; I'm not familiar with the FreeBSD multipath implementation). To get the "truth", you need to know to use DM-specific commands to map /dev/dm-3 to the underlying block devices. Similarly, you may have a file system on logical volume /dev/lv17, which in reality is implemented as a 100GB slice of physical volume group /dev/vg03, which in turn contains 4 disks /dev/sdd, /dev/sde, /dev/sdf and /dev/sdx. Again, lots of commands are required to get to the "truth".
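To give a flavor of what "DM-specific commands" means: on Linux, dmsetup deps prints the (major, minor) pairs of the devices underneath a device-mapper node, which a script would then resolve via /sys/dev/block. A sketch parsing a sample line (the numbers below are made up, not from a live system):

```shell
# Sketch: turn "dmsetup deps /dev/dm-3" output such as
#   3 dependencies  : (8, 208) (8, 112) (8, 16)
# into major:minor pairs, one per line.
deps='3 dependencies  : (8, 208) (8, 112) (8, 16)'
echo "$deps" | grep -o '([0-9]*, [0-9]*)' | tr -d '() ' | tr ',' ':'
# prints: 8:208  8:112  8:16 (one per line)
```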
There is another aspect of "truth" we haven't really discussed. You seem to want a command that shows that file system /home is physically located on partition /dev/ada2p4. But I don't like that naming convention. What I really want to tell users is that it is /dev/gpt/data_home, which shows that this is the partition I named "home" on the disk I named "data" (to distinguish it, for example, from the partition I named "usr" on the same disk, or from the SSD I named "boot").
But another important fact is that this disk is really a Hitachi model HDS5C3030ALA630 with serial number MJ0351YNG9RZ6A. On a system with multiple disks, that is very important to know, because I may have a dozen physical disks that look vaguely similar, of which half a dozen are the same Hitachi model, but only one has this particular serial number (so if need be, I can find the physical artifact). On a sufficiently large system, an important part of the "truth" is the WWN of the disk (5000cca228c46d95, which is how to identify the disk when speaking to a SAS expander or FC switch), and if the disk is mounted in an enclosure, the serial number of the enclosure and the position of the disk in the enclosure (for example "4th drawer from the bottom, 2nd row from the front, 3rd disk from the left", if I'm using the NetApp DE6600 disk enclosure). For a large system, the "truth" has to include information required to physically locate and identify drives.
Let's not even talk about external RAID arrays: what looks to the computer like a single block device (perhaps accessible via multipath) is in reality dozens or hundreds of disk drives, and to discover the "truth", you need to get into their management systems (which can be heinously complicated, and are usually not user-friendly).
But today the world is much more complicated. For example, look at the output of mount for a ZFS file system: where you usually expect the block device, it has a name that is clearly not a block device (I think it is the ZFS pool name, but I'm not an expert on ZFS). To find out the corresponding block devices, you have to use a command such as zpool. And the interesting part is: in the output you may find disks that are not even currently connected, and are not currently accessible as block devices (but were there yesterday).
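Concretely, a ZFS entry in mount output looks something like "zroot/usr/home on /usr/home (zfs, local, nfsv4acls)": the first field is pool/dataset, not a device. A script can at least peel off the pool name; getting from the pool to real disks then requires zpool status. Names here are illustrative:

```shell
# Sketch: extract the pool name from a ZFS dataset name as it appears
# in the first field of mount output. "zroot/usr/home" is a made-up
# example following the common FreeBSD default pool name.
dataset='zroot/usr/home'
pool=${dataset%%/*}      # strip everything after the first '/'
echo "$pool"             # → zroot
# a real script would continue with: zpool status "$pool"
```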
And with seriously complicated file systems, the output of mount becomes seriously inadequate. You can have a single GPFS file system with tens of thousands of physical disks. If you look at the output of mount there, you see a single fake block device, and in the output of df you will see a very large number for the free space. To really find out which real devices are in use, what hosts they are connected to, what their states are, whether they are currently reachable or being allocated on, mirrored or remote-mirrored or not, if they are internally RAID-ed what the state of the redundancy is, and what physical locations they are installed in, you need to learn to use about a dozen commands, and on a large system parse tens of thousands of lines of output (or use a summary command).
So, you want to tell users the "truth"?
My suggestion: have them use mount and df. That will tell them the truth, as seen from the view of the real file system that is running.
For the memory- or kernel-data-structure-based file systems, there isn't anything else. For the simple single-disk file systems like UFS, ext2 and NTFS, teach them that the first entry in the output of mount is a block device, and with very simple rules you can map from a partition or slice to the whole disk (/dev/sda4 -> /dev/sda, or /dev/ada0p4 -> /dev/ada0). For multipath, volume-managed and encrypted disks, the block device is the "truth" (it is an accessible block device, and you can use dd to destroy your file system), but there are additional block devices hidden underneath. For NFS and friends, you have to teach them to decode the first entry into a host name and export name. And for interesting file systems (ZFS, QFS, Lustre, GPFS, ...), the "truth" is too complicated for a script to capture.
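The "very simple rules" for the single-disk case can be sketched in a few lines of sh. This only covers the two naming schemes mentioned above; anything fancier (labels, dm, gmirror, ...) is deliberately out of scope:

```shell
# Sketch: map a partition/slice node to its whole disk.
# Covers FreeBSD ada-style (ada0p4 -> ada0) and Linux sd-style
# (sda4 -> sda) names only; everything else is passed through.
part_to_disk() {
    case $1 in
        /dev/ada*p[0-9]*)    echo "${1%p[0-9]*}" ;;   # ada0p4 -> ada0
        /dev/sd[a-z]*[0-9]*) echo "${1%%[0-9]*}" ;;   # sda4   -> sda
        *)                   echo "$1" ;;             # not a simple partition
    esac
}

part_to_disk /dev/ada0p4   # → /dev/ada0
part_to_disk /dev/sda4     # → /dev/sda
```

Note how quickly even this breaks down: feed it /dev/dm-3 or a ZFS dataset name and it can only echo the input back, which is the point of the whole discussion above.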