
Notebook with 2x SSD: advice?

cbrace

Active Member

Thanks: 6
Messages: 232

#1
Hi all,

I would like to install FreeBSD on an Asus Zenbook which has two 128GB SSD drives. I'm inclined towards a RAID1-type configuration.

Is this something that I should set up before the FreeBSD install, or can the installer deal with it? That is to say, is the management of the two drives something I can do with ZFS?

Is it worth using ZFS for this installation? This notebook has 8GB of RAM and a fast processor (quad core i7), so presumably has the resources to manage ZFS; the question is whether it would be useful.

I have no experience with either FreeBSD on the desktop or ZFS. At the moment, I am leaning toward a PC-BSD install in order to get up to speed fairly quickly.

TIA
 

storvi_net

Active Member

Thanks: 25
Messages: 133

#2
The specs sound very good. With your drive configuration you want a ZFS mirror, and that amount of RAM is absolutely fine for 128 GB disks.
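As a sketch, a two-disk ZFS mirror can be created by hand roughly like this (the pool name "tank" and the device names ada0/ada1 are examples only; I believe the FreeBSD 10.x installer's root-on-ZFS option can set up the same layout for you):

```shell
# Create a mirrored pool from the two SSDs (example device names).
# WARNING: this destroys any existing data on both disks.
zpool create tank mirror ada0 ada1

# Confirm both disks are attached and the pool is healthy.
zpool status tank
```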

Whether FreeBSD or PC-BSD works with this notebook depends on the other hardware (graphics, network controller).
Perhaps you can post the exact model / specs, so the experts here can try to help you with information about hardware support.

Regards
Markus
 

cbrace

Active Member

Thanks: 6
Messages: 232

#3
Thanks. The unit is an Asus Zenbook UX301L. I've got Mint installed at the moment, and here is the lspci output:
Code:
00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Device 0a2e (rev 09)
00:03.0 Audio device: Intel Corporation Haswell-ULT HD Audio Controller (rev 09)
00:04.0 Signal processing controller: Intel Corporation Device 0a03 (rev 09)
00:14.0 USB controller: Intel Corporation Lynx Point-LP USB xHCI HC (rev 04)
00:16.0 Communication controller: Intel Corporation Lynx Point-LP HECI #0 (rev 04)
00:1b.0 Audio device: Intel Corporation Lynx Point-LP HD Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 1 (rev e4)
00:1c.3 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 4 (rev e4)
00:1f.0 ISA bridge: Intel Corporation Lynx Point-LP LPC Controller (rev 04)
00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04)
00:1f.3 SMBus: Intel Corporation Lynx Point-LP SMBus Controller (rev 04)
00:1f.6 Signal processing controller: Intel Corporation Lynx Point-LP Thermal (rev 04)
02:00.0 Network controller: Intel Corporation Wireless 7260 (rev 6b)
The video adapter offers 2560x1440 resolution, which seemed like a nice idea when I bought it, but I've found that HiDPI support is rather uneven under Linux. A lot of webpages aren't properly rendered in Chrome and Firefox (weird issues with spacing, etc.). I don't expect that to be any different with MATE etc. on FreeBSD.

Edit: One thing I was hoping to be able to do was boot FreeBSD 10.1 and run GNOME or KDE from a USB stick to see what hardware support looked like from within a window manager. But the FreeBSD USB images appear to be CLI-only. Is this correct, or am I missing something?
 

hukadan

Active Member

Thanks: 140
Messages: 235

#4
Given your output, it seems that you have a Haswell GPU. This is not yet supported by FreeBSD, as shown here.
 

ondra_knezour

Aspiring Daemon

Thanks: 161
Messages: 709

#6
PC-BSD and GhostBSD are both able to run a live graphical session from the CD/USB media, AFAIK. The FreeBSD installation/live system is text-only, as you have already found.
 

wblock@

Administrator
Staff member
Administrator
Moderator
Developer

Thanks: 3,558
Messages: 13,856

#7
Many Haswell GPUs can be used with the vesa driver. It's not as good as full support, but better than no bitmap support at all.
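For example, the vesa driver can be selected with a minimal /etc/X11/xorg.conf fragment like this (the Identifier string is arbitrary):

```
Section "Device"
	Identifier "Card0"
	Driver     "vesa"
EndSection
```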
 

hukadan

Active Member

Thanks: 140
Messages: 235

#9
If you want to try *BSD, you can still install DragonFlyBSD. Unless I am wrong, they support Haswell GPUs. I have not tried it myself, though.
 

eldaemon

New Member


Messages: 6

#10
I would not use ZFS on a laptop. You'd probably want at least 16GB of memory for ZFS, as well. I think a GEOM mirror and UFS would be best. However, be careful to align the partitions to 4k sectors for optimum performance. I'd also consider enabling TRIM.
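A hedged sketch of such a setup (device names are examples, boot partitions are omitted for brevity, and the 1m alignment is simply a convenient multiple of 4k):

```shell
# Partition each SSD identically, aligned to 1M (a multiple of 4k).
gpart create -s gpt ada0
gpart add -t freebsd-ufs -a 1m ada0
gpart create -s gpt ada1
gpart add -t freebsd-ufs -a 1m ada1

# Mirror the two partitions (mirroring partitions rather than whole
# disks avoids the gmirror metadata clashing with the backup GPT).
gmirror label -v gm0 ada0p1 ada1p1

# Create a UFS filesystem with soft updates (-U) and TRIM (-t).
newfs -U -t /dev/mirror/gm0
```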

ZFS might be worthwhile if you're going to use a bunch of snapshots and host VMs that you want zvols for. If that's not the case, UFS is awesome.
 

ANOKNUSA

Aspiring Daemon

Thanks: 358
Messages: 671

#11
ZFS might be worthwhile if you're going to use a bunch of snapshots and host VMs that you want zvols for. If that's not the case, UFS is awesome.
The data integrity features of ZFS might make it worthwhile as well, but yes, its usefulness on a laptop is limited. I'd say it's worth playing around with to determine your preference, and use ZFS if you prefer it over UFS, but on a laptop there probably aren't any distinct objective advantages. One disadvantage I had to deal with was that backing up ZFS to anything other than another ZFS filesystem can be a pain, as there's no way to check the integrity of the backups and no way to grab individual files from them. Since I back up to a Linux NAS, that was a deal-killer for me.
 

wblock@

Administrator
Staff member
Administrator
Moderator
Developer

Thanks: 3,558
Messages: 13,856

#12
As ZFS is a copy-on-write filesystem, completely overwriting data in place is a big problem. And on FreeBSD there is still no native ZFS crypto; ZFS on top of GELI does not solve this kind of problem.
With ZFS on geli(8), the old data is still present, but it is also still encrypted.

Some SSDs offer internal AES encryption (a "Self-Encrypting Drive", or SED) through the hard drive password. Like all encryption, there is a degree of trust involved, in this case trust of the SSD vendor.

ZFS will work fine on a notebook. Current notebooks have 4G or 8G, which is plenty, and ZFS memory usage can be limited anyway.
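For example, the ARC can be capped in /boot/loader.conf (the 2G figure is an arbitrary example, not a recommendation):

```
# /boot/loader.conf -- limit the ZFS ARC to 2 GB
vfs.zfs.arc_max="2G"
```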
 

gkontos

Daemon

Thanks: 437
Messages: 2,069

#14
I don't get it... How hard is it for someone to understand that data which is still there is still encrypted?
 

kpa

Beastie's Twin

Thanks: 1,673
Messages: 6,084

#15
I don't get it... How hard is it for someone to understand that data which is still there is still encrypted?
Yes, when the disk is offline there's no practical way to snoop around on it other than obtaining the encryption keys. When the system is running, however, all data on the encrypted provider is accessible to the superuser in unencrypted form, including the contents of deleted files. It can easily be argued that deleted data should be wiped immediately, before the delete operation finishes (or at least with a guarantee that it won't live too long, to preserve performance), to avoid accidental exposure.
 

chrbr

Aspiring Daemon

Thanks: 204
Messages: 593

#16
I think it should be no black magic to do this with a script following the steps below:
  1. Rename the file to some unique name, so that a new file with a similar name can be created in parallel by other processes
  2. Get the size of the file
  3. Overwrite it with dd(1), using /dev/random as input data
  4. Run fsync(1) to make sure the data is written
  5. Delete the file.
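Those steps could be sketched as a small sh(1) function (shred_file is a hypothetical name, and sync(8) is used here instead of fsync(1) for portability; note that this only has the intended effect on filesystems that overwrite blocks in place):

```shell
#!/bin/sh
shred_file() {
    f="$1"
    # 1. Rename to a unique name so the old name is free immediately.
    tmp="${f}.wipe.$$"
    mv -- "$f" "$tmp" || return 1
    # 2. Get the size of the file in bytes.
    size=$(wc -c < "$tmp" | tr -d ' ')
    # 3. Overwrite the contents in place with random data.
    dd if=/dev/random of="$tmp" bs="$size" count=1 conv=notrunc 2>/dev/null
    # 4. Flush the write to disk.
    sync
    # 5. Delete the file.
    rm -- "$tmp"
}
```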
 

getopt

Well-Known Member

Thanks: 294
Messages: 494

#17
I think it should be no black magic to do this with a script following the steps below:
  1. Rename the file to some unique name, so that a new file with a similar name can be created in parallel by other processes
  2. Get the size of the file
  3. Overwrite it with dd(1), using /dev/random as input data
  4. Run fsync(1) to make sure the data is written
  5. Delete the file.
Exactly this does NOT work as expected on ZFS, because it is designed as a copy-on-write filesystem. Your dd(1) does not hit the target: the new data is written to different blocks on the disk.

Even on SSDs without ZFS this does not work reliably, because of wear leveling. It only works on spinning hard drives with a non-copy-on-write filesystem.
 

gkontos

Daemon

Thanks: 437
Messages: 2,069

#18
Yes, when the disk is offline there's no practical way to snoop around on it other than obtaining the encryption keys. When the system is running, however, all data on the encrypted provider is accessible to the superuser in unencrypted form, including the contents of deleted files. It can easily be argued that deleted data should be wiped immediately, before the delete operation finishes (or at least with a guarantee that it won't live too long, to preserve performance), to avoid accidental exposure.
Doesn't this apply to data that hasn't been deleted as well? I don't see the point here unless we are talking about a roaming laptop with many superusers.
 

kpa

Beastie's Twin

Thanks: 1,673
Messages: 6,084

#19
Doesn't this apply to data that hasn't been deleted as well? I don't see the point here unless we are talking about a roaming laptop with many superusers.
Yes, it applies to any data, but in particular to sensitive information that should not be stored any longer than absolutely necessary.
 

gkontos

Daemon

Thanks: 437
Messages: 2,069

#20
Yes, it applies to any data, but in particular to sensitive information that should not be stored any longer than absolutely necessary.
As much as I like your diplomatic answer, I don't understand where the problem is if this is my personal laptop. How can anyone gain access to my deleted or "not deleted" data unless they have the password to boot?
 

kpa

Beastie's Twin

Thanks: 1,673
Messages: 6,084

#21
As much as I like your diplomatic answer, I don't understand where the problem is if this is my personal laptop. How can anyone gain access to my deleted or "not deleted" data unless they have the password to boot?
Imagine creating some secret keys (and certificates), for example for SSL client authentication, SSH public/private key pairs, or something similar. The secret keys are stored in some form of ASCII-armor format, so they are recognizable as secret keys just by looking at them. You store the keys on a separate medium for transferring them, and also on a separate backup medium. You then delete the key files from your system for safety. Your next task is to upload some documents to your public web space. You select the documents for uploading and start the upload. By some crazy coincidence a hardware malfunction happens on your system and the wrong disk blocks get read as your documents; those disk blocks happen to hold copies of the secret keys you thought you deleted, because of course the OS just marked the blocks as unused. You have just published your secret keys for the whole world to read.
 

gkontos

Daemon

Thanks: 437
Messages: 2,069

#22
You select the documents for uploading and start the upload. By some crazy coincidence a hardware malfunction happens on your system and the wrong disk blocks get read as your documents; those disk blocks happen to hold copies of the secret keys you thought you deleted, because of course the OS just marked the blocks as unused. You have just published your secret keys for the whole world to read.
Wouldn't those be encrypted too? My full pool is encrypted. I really fail to understand this.
 

kpa

Beastie's Twin

Thanks: 1,673
Messages: 6,084

#23
Wouldn't those be encrypted too? My full pool is encrypted. I really fail to understand this.
Yes, the underlying storage is encrypted, but the OS will happily read the wrong blocks and decrypt them to plaintext. I should clarify that for this scenario to work, the error would have to happen at the filesystem level in the OS; if it happened anywhere "lower", like in the block device drivers, it would be caught very quickly. I believe geli(8) is designed to guard against such errors where wrong blocks get read from the disk: they would be rejected when the cryptographic MAC check fails for the read blocks. Filesystems such as UFS have no such integrity checking, and that's why the scenario I depicted is at least possible.
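For reference, geli(8) enables per-sector integrity checking when a provider is initialized with -a (the device name and algorithm choices here are examples):

```shell
# Initialize a provider with AES encryption plus HMAC/SHA256
# authentication; sectors failing the MAC check are rejected on read.
geli init -e AES-XTS -a HMAC/SHA256 -s 4096 /dev/ada0p3
```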
 

gkontos

Daemon

Thanks: 437
Messages: 2,069

#24
Yes, the underlying storage is encrypted, but the OS will happily read the wrong blocks and decrypt them to plaintext. I should clarify that for this scenario to work, the error would have to happen at the filesystem level in the OS; if it happened anywhere "lower", like in the block device drivers, it would be caught very quickly.
OK, then that could also happen with Oracle Solaris ZFS with native encryption. Please see http://www.oracle.com/technetwork/a...e-admin/manage-zfs-encryption-1715034.html