Server layout: rootfs on USB flash drive or NVMe?

I said:
I believe that I did. I said "You might want to provision storage to a VM which is not ZFS on the server side". It's all about avoiding COW/COW. ZFS is COW. If your client is COW, then you may want UFS on the server side, and you may want a mirror for redundancy on the server. [You can still have ZFS, and all its advantages, on the client side.]
You said:
Yes, exactly! Note that the point of interest is OUTSIDE your quotation marks :)
And you had this silent assumption that I knew that COW/COW should be avoided, but I didn't.
Silent assumptions, unspoken agreements, whatever you call them: these beasts are among the nastiest, most insidious pitfalls in human communication (and sadly in software development, too).
I had raised the issue of "avoiding COW/COW" in an earlier post:
I would try place my important VMs on the SSDs. But beware, you don't want both the hypervisor and the VM client using copy-on-write file systems.
 
 
gpw928 said:
I would try place my important VMs on the SSDs. But beware, you don't want both the hypervisor and the VM client using copy-on-write file systems.
  1. See, you wrote "you don't want CoW/CoW" several times (3? 4?), but not WHY. That's the POI. Why should I (or anyone else) blindly follow your advice when it doesn't include at least a link where I can read an explanation, or a short outline of the reasoning? Even a few keywords are often enough: in this case, "write amplification" would have triggered my memory instantly.
  2. When I asked for an explanation WHY (two times), you just repeated "I already wrote that you don't want CoW/CoW" or "see my explanation above on CoW/CoW" -- which was NOT an explanation, but a statement like "it is a fact" -- which, naturally, makes one suspicious, and is a reason NOT to follow the advice of someone who is either not able or not willing to answer the question about the WHY.
Fortunately the fog has lifted.
 
Status update & a nice argument to reuse & buy used parts:
  • When you buy your equipment in internet auctions, watch out for the details!
    I searched for an M.2 NVMe 1.3 SSD, because the target board has only PCIe 3.0 x4, so plugging in more modern, faster devices wouldn't make sense.
    I decided 2 SSDs of 256 GB each would fit my needs. But on the photo of the device you can see it's actually 1024 GB (when you enlarge it), so I chose "buy now" and paid a little bit more. Now I have a 1 TB NVMe 1.3 SSD for the price of a 256 GB one. :)
NOTE: NO ONE WILL BE INTERESTED! THIS IS KIND OF PERSONAL LOG!
If you have read this far, you may want to read more interesting topics in the forum.
OK, now I'm going to shuffle the 3 SATA SSDs I have and do the following:
  1. Build the 1 TB NVMe SSD (arrived today) into the server beside its 256 GB counterpart and create the partitions for swap, the support vdevs (2x cache, log, special) for the 2 zpools on the bigger HDDs (only one ATM), a geom mirror, and maybe a zpool (mirrored).
    I'll decide later how to use the free ~700 GB of the bigger SSD.
  2. Install FreeBSD 15-STABLE on my old laptop with one of the 2 small SATA SSDs that I bought to revive it, plus KDE so I have a GUI.
  3. Install FreeBSD 15-STABLE onto the other old SATA SSD in an external USB 3.x SATA case; this one will go into my main laptop later, plus KDE so I can start with a GUI quickly.
  4. Pull the 1 TB SATA SSD out of my main laptop and put it into the server.
  5. Put the SATA SSD from step 3 into my main laptop, hopefully the GUI doesn't need any more configuration so I have graphical Internet access etc. instantly. Time w/o working GUI & internet access is zero because of the revived old laptop.
  6. Log into the server via ssh(1)
    The server still has no OS installed, it is booted from the tweaked FBSD installer that starts sshd(8)
  7. Create the partitions on the HDD: swap, a mirrored and a striped zpool
  8. Mirror (resilver) the zpool on the HDD from the 1 TB SATA SSD (has been in my main laptop). The mirror side on the HDD will naturally be much larger than needed for now.
  9. Export it via NFS: zfs set sharenfs=on pool/data-mirror
  10. Mount the old data via NFS from the server on the two laptops
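For the server-side steps (7-10), the commands might look roughly like the sketch below. All device names, partition sizes, labels, and pool/dataset names (ada0 for the HDD, gpt/ssd-data, pool/data-mirror, the server hostname) are placeholders for illustration, not my actual layout:

```shell
# Step 7: partition the HDD (device name ada0 and sizes are assumptions)
gpart create -s gpt ada0
gpart add -t freebsd-swap -s 16G -l hdd-swap   ada0
gpart add -t freebsd-zfs  -s 1T  -l hdd-mirror ada0   # side of the mirrored zpool
gpart add -t freebsd-zfs         -l hdd-stripe ada0   # rest for the striped zpool

# Step 8: attach the HDD partition to the existing pool on the 1 TB
# SATA SSD; zpool attach starts the resilver automatically
zpool attach pool gpt/ssd-data gpt/hdd-mirror
zpool status pool                                     # watch resilver progress

# Step 9: sharenfs is a ZFS property, so it is set with "zfs set";
# the NFS server must also be enabled and started
sysrc nfs_server_enable=YES
service nfsd start
zfs set sharenfs=on pool/data-mirror

# Step 10, on each laptop: mount the exported dataset (the export path
# is the dataset's mountpoint, /pool/data-mirror by default)
mount -t nfs server:/pool/data-mirror /mnt/data
```

Note that `zfs set sharenfs=on` exports with default options; restrictive options (networks, maproot) can be given as the property value instead of `on`.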
 