To what phoenix mentioned, I would like to add that you can build a "write-optimized SSD" with MLC and a "read-optimized SSD" with SLC -- it depends on how you organize the I/O paths and priorities internally. It is not read/write performance that explains why you are unlikely to see SLC in commodity USB tokens or MLC in enterprise products.
(although greed is what moves this civilization)
There are flash devices that contain twice or more the flash storage of the advertised capacity. This allows the device to write at 'full speed': while one part of the controller accepts new writes, another thread erases the blocks you just overwrote, keeping them ready for the next writes.
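The over-provisioning idea above can be sketched as a toy model. This is purely my own illustration (no real controller's FTL works exactly like this, and the class and method names are made up): writes always grab a pre-erased spare block, while the blocks they replace are queued for a background erase.

```python
from collections import deque

# Toy model of an over-provisioned flash device: it advertises 4 blocks
# but physically ships 8, so the fast path of write() never has to wait
# for an erase. All names here are hypothetical, for illustration only.

class OverProvisionedFlash:
    def __init__(self, advertised=4, physical=8):
        self.erased = deque(range(physical))  # pre-erased physical blocks
        self.map = {}                         # logical block -> physical block
        self.dirty = deque()                  # stale blocks awaiting erase

    def write(self, logical):
        if not self.erased:                   # spares exhausted: slow path,
            self.erased.append(self.dirty.popleft())  # erase inline
        old = self.map.get(logical)
        if old is not None:
            self.dirty.append(old)            # old copy queued for erase
        self.map[logical] = self.erased.popleft()

    def background_erase(self):               # stands in for the erase thread
        while self.dirty:
            self.erased.append(self.dirty.popleft())

dev = OverProvisionedFlash()
for lba in [0, 1, 0, 1, 0, 1]:                # overwrite the same blocks
    dev.write(lba)                            # fast path every time
dev.background_erase()
print(len(dev.erased))                        # spares replenished: prints 6
```

With only the advertised 4 blocks and no spares, every overwrite after the first pass would hit the slow erase-inline path; the extra capacity is what keeps the write speed up.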
There are all sorts of 'magic' that can be done to semiconductor devices, but one thing is certain: we cannot (yet) override the laws of physics, which basically say that an SLC cell stores one voltage level (1 bit), while an MLC cell stores more states (two or three bits). In fact, the state in MLC flash is complex and far less stable (in the physical, or rather electrical, sense). For the same technological base, SLC flash will always be better than MLC flash in every aspect but storage density.
Back to the topic.
The SLOG is "write always, read only on recovery". From a performance perspective, this means you need write-optimized storage. As mentioned many times before, it does not have to be an SSD. Writes are pretty much sequential, so this does not rule out HDDs. The volume of data written to the SLOG is insignificant, and ZFS v23 reduces the ZIL even further; the SLOG usually does not need to be larger than a few gigabytes. Only synchronous writes go through the SLOG -- you do not need 900MB/s of write bandwidth to the SLOG just to write 900MB/s to the pool. It only has to be separate from the main storage pool. This is in order to not waste IOPs in the main pool, to not move heads, and to not have to free the in-pool ZIL records (which also happen to be variable size, leading to fragmentation). The only requirement for the SLOG is to survive a server crash while preserving what was supposedly written there. Most such crashes happen at power failure -- so it must be able to survive power failure, no matter what.
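To make "only synchronous writes go through the SLOG" concrete, here is a minimal sketch of the difference from an application's point of view. The file path is made up for illustration; the ZFS-specific behavior described in the comments assumes a dataset with the default sync settings.

```python
import os
import tempfile

# Hypothetical example path; only the write pattern matters here.
path = os.path.join(tempfile.mkdtemp(), "journal.dat")

# Asynchronous write: buffered by the OS and flushed with a later
# transaction group. On ZFS this never touches the SLOG.
with open(path, "wb") as f:
    f.write(b"async record\n")

# Synchronous write: fsync() blocks until the data is on stable
# storage. On ZFS that means committing a ZIL record, which lands on
# the separate SLOG device if the pool has one.
with open(path, "ab") as f:
    f.write(b"sync record\n")
    f.flush()
    os.fsync(f.fileno())

print(os.path.getsize(path))  # 25: both records are on disk
```

A database commit, an NFS stable write, or a file/directory creation all end up on the fsync-like path; ordinary buffered writes do not, which is why SLOG bandwidth needs are so much smaller than pool bandwidth.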
The best candidate for an SLOG is battery-backed RAM. Without any doubt.
The next best candidates, in no particular order, are a small enterprise-grade low-latency disk drive, SLC flash, or MLC flash. Of course, flash should have capacitor or battery backup. Latency is a significant factor for the SLOG (and for the L2ARC too, by the way, right after read performance). Write latency is not something any flash device can be proud of (but, as noted above, there are exceptions among purpose-built devices).
To the original question: is it wise to mix SLOG and L2ARC on the same device?
If the device is of a sufficiently recent generation, with sufficiently many write/read IOPs, perhaps yes. It all depends on the application. The write operations will 'choke' most cheap flash drives, leaving nothing for the read portion, so the L2ARC will suffer. Or the large writes the L2ARC does will impact the SLOG response time. This will only happen after a certain threshold is reached, that is, when you issue a sufficient number of sync operations, from database activity, file/directory creation, etc. Unfortunately, NFS is one application where every write from the client is a sync operation. In most installations this threshold may never be reached. It is best to experiment -- observe the drive load with gstat etc.
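The threshold effect above can be put in back-of-envelope form. All the numbers here are made up for illustration, and the function is my own sketch, not a real model of any drive: a cheap SSD with a fixed IOPS budget shared between SLOG writes and L2ARC reads, where writes cost more than reads.

```python
# Hypothetical shared-budget model: once sync-write demand crosses a
# threshold, nothing is left for L2ARC reads and cache hits fall back
# to the main pool.

def l2arc_read_budget(device_iops, sync_writes_per_s, write_cost=2.0):
    # write_cost > 1 models flash, where a write op is costlier than a read
    spent = sync_writes_per_s * write_cost
    return max(0.0, device_iops - spent)

device_iops = 5000  # assumed combined budget of a cheap SSD
for sync_writes in (500, 1500, 2500, 3000):
    print(sync_writes, l2arc_read_budget(device_iops, sync_writes))
```

With these made-up numbers, 500 sync writes/s still leaves 4000 read IOPs for the L2ARC, but at 2500 sync writes/s the budget is exhausted, which is the "choke" point where sharing one device stops making sense.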
PS: sub_mesa, I do not have a pointer handy about HDDs writing garbage when power is lost. I speak from experience and a shelf full of dead drives.
But Google is our friend.
Enterprise drives take special measures against this -- providing large capacitors and special mechanics whose sole purpose is to lift the heads away from the surface should power to the drive be lost or fluctuate, to prevent the heads from writing random garbage to the platters. This is typically not the case with cheap desktop drives, although most do take some measures, even if not as aggressive. There are lots of stories in drive-handling lore, such as low-level formatting (in the original sense, without today's quotes) a drive to create new sector marks after the old marks were somehow lost.
But luckily, this is more or less history.