I have a Lenovo M700. Despite its very small size, it's a powerful system -- fast Intel processor, 8GB RAM and a 128GB SSD. I was running Arch Linux on it and ran into one of the risks of a rolling-release distribution -- the Linux kernel developers broke the network driver for this system (E1000). I used this as an excuse to install FreeBSD 11.1, after backing up the Linux install. The FreeBSD install used ZFS on a legacy GPT layout.
I have a suite of financial management software that I wrote for myself. My financial data is stored in an SQLite database. One of the suite's components is a report generator written in C (I also have a Rust version -- I am gradually porting some of the components to Rust). The report generator is compute-intensive, and I believe most of the time is spent in SQLite, not in my code. Two parts of the work can run in parallel, and those are run in separate threads.
With Arch running on this hardware, the elapsed runtime for the report generator is about 40 seconds for both the C and Rust versions. The runtime on FreeBSD is about 3x longer, a bit over 120 seconds. In the Rust case, the compiler versions were identical. The gcc compiler on FreeBSD is older than on Arch (gcc5 vs. gcc7), but rustc uses LLVM, not gcc, as its back end -- and since the Rust timings matched the C timings, the gcc version difference seems unlikely to be the culprit. The SQLite versions are the same (1.21). I'm guessing this is an ext4 vs. ZFS issue. The report generator only reads the database.
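If ZFS is the suspect, one plausible mismatch is ZFS's default 128K recordsize against SQLite's much smaller page size (typically 4K), which can cause heavy read amplification on random reads. A sketch of what I'd check and try on the FreeBSD side -- the dataset name `zroot/data` and database file name `finances.db` are assumptions, and recordsize only affects newly written files:

```
# Check the current recordsize and cache policy on the dataset
# holding the database (zroot/data is a hypothetical dataset name).
zfs get recordsize,primarycache zroot/data

# Check the database's page size (requires the sqlite3 CLI).
sqlite3 finances.db 'PRAGMA page_size;'

# Lower the recordsize, then copy the database so the file is
# rewritten with the smaller blocks.
zfs set recordsize=16K zroot/data
cp finances.db finances-new.db
```

Re-running the report against the rewritten copy would show whether recordsize is the issue; since the generator only reads the database, an even cruder test is copying the whole database to a memory disk or tmpfs and timing it there, which takes ZFS out of the picture entirely.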
I also had difficulty with X on this machine (it has Intel graphics hardware and, yes, I installed xf86-video-intel). Instead of using the Intel driver, X fell back to the VESA driver, and the graphics performance was awful. I tried X -configure to generate a usable xorg.conf to force the use of the Intel driver, but that failed.
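Rather than a full X -configure-generated xorg.conf, a minimal Device section forcing the intel driver might be enough; on FreeBSD it would go in a file under /usr/local/etc/X11/xorg.conf.d/. This is a sketch -- the Identifier is arbitrary, and the BusID (from pciconf -lv) may or may not be needed:

```
Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    # BusID    "PCI:0:2:0"   # uncomment if X still picks another device
EndSection
```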
I ended up restoring Arch because of these problems. But if someone has a bright idea about the performance issue, I have another machine with which I might be able to do some useful experimenting.