Search results

  1. peetaur

    It's all about jokes, funny pics...

    Some highly innovative howtos can be found online, such as: Defraggle your motherdisc! http://www.datadocktorn.nu/us_frag1.php
  2. peetaur

    Still trying to solve zfs / mps driver SCSI timeout and disk lost problem

    No, upgrading the firmware on them makes them work fine. The SSDs have now been running for half a year without any problems. I don't know if this works for all devices, as obviously firmware is different on different models of disks/SSDs. I don't believe mps is the only thing to blame. My...
  3. peetaur

    SNMP disk space.

    To get disk usage, I found I only needed to uncomment one line in snmpd.config: vim /etc/snmpd.config begemotSnmpdModulePath."hostres" = "/usr/lib/snmp_hostres.so" vim /etc/rc.conf bsnmpd_enable="YES" And then oddly, on some servers the SNMP description (used to look up a specific file...
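The flattened preview above can be unpacked into the two files it mentions. This is a sketch based only on the snippet shown; the module path can differ between FreeBSD versions and architectures:

```
# /etc/snmpd.config -- uncomment the hostres module so bsnmpd(1)
# serves disk usage via HOST-RESOURCES-MIB
begemotSnmpdModulePath."hostres" = "/usr/lib/snmp_hostres.so"

# /etc/rc.conf -- enable the daemon at boot
bsnmpd_enable="YES"
```

After editing, `service bsnmpd start` (or `restart`) picks up the change.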
  4. peetaur

    FreeBSD 8.2-RELEASE panic with zfs

    That's sound advice, but my dual 1400W PSU system is supposed to run 36 disks with one PSU, and runs fine with new firmware, and reports only 400W used right now. I still haven't had any new crashes, only hangs, which I've worked around by locking any process using "zfs" so only one can do it...
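The "locking" workaround described above can be sketched with FreeBSD's lockf(1). The wrapper name and lock path here are hypothetical, not from the original post:

```shell
#!/bin/sh
# zfs-serial.sh -- hypothetical wrapper: serialize zfs(8) invocations
# by taking an exclusive lock before running the real command.
# -k keeps the lock file around; with no -t, callers wait their turn.
exec lockf -k /var/run/zfs-serial.lock /sbin/zfs "$@"
```

Usage would be e.g. `./zfs-serial.sh snapshot tank@nightly`, so two concurrent `zfs` commands can never run at once.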
  5. peetaur

    Horrible iSCSI (istgt) performance

    Just to repeat the same thing again, you DID update QueueDepth to 64, right? Because out of everything I tested, this is the only thing that matters. Here are some example snippets: [Global] ... # QueueDepth is limited by this number, and I don't know if it is per connection or across all (due to...
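Filling out the truncated snippet, a minimal istgt.conf fragment with the setting being discussed might look like this. The surrounding keys follow the shipped sample config; 64 is the value reported in these posts, and whether the limit applies per connection or globally is, as the post says, unclear:

```
# /usr/local/etc/istgt/istgt.conf (fragment, illustrative)
[Global]
  # Upper bound on QueueDepth values below
  MaxQueueDepth 64

[LogicalUnit1]
  # Outstanding-command queue depth for this LUN;
  # commented out (i.e. effectively unset) in the sample config
  QueueDepth 64
```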
  6. peetaur

    Horrible iSCSI (istgt) performance

    If you are referring to my tests: I doubt that a CPU or a cheap network card would be the difference between 4.5 MB/s and 40-80 MB/s. And the test clearly showed the zil at 100% load with the FreeBSD + vbox machine. And other tests (eg. scp) always show fast enough results.
  7. peetaur

    Horrible iSCSI (istgt) performance

    I tested this again, with VERY different results. Previously the initiator machine was using FreeBSD with VirtualBox's built in initiator, and 1Gbps. And as I said before, it was only around 4.5 MB/s (writing). This time, I used a Linux machine, and it runs as fast as zvols (95-110 MB/s...
  8. peetaur

    Horrible iSCSI (istgt) performance

    One online howto made istgt segfault. :D Editing the sample config files worked fine, but I found no documentation whatsoever... just examples by third parties. I wanted documentation so I would know what every setting meant, but found none. On top of the missing documentation, if...
  9. peetaur

    NFS write performance with mirrored ZIL

    I kind of think you should test your ACARD RAM with memtest, or return it and get a different one. :D And I can't wait to hear your test results with the Zeus RAM based SSD. I am tempted to buy one for my vm datastore server but first I'm testing some NFS kernel tuning stuff. See...
  10. peetaur

    NFS write performance with mirrored ZIL

    The Zeus IOPS is a flash array based SSD... I'm talking about a RAM based one.
  11. peetaur

    Horrible iSCSI (istgt) performance

    I tested istgt with a file on zfs; it went only 4.5 MB/s! A zvol went over 100. Is this what you did? (and due to zvol hangs, I'm reverting to NFS for now.)
  12. peetaur

    NFS write performance with mirrored ZIL

    Have you tried this RAM based SSD? http://www.stec-inc.com/product/zeusram.php I heard on IRC that even ESXi, the most horrible case possible, goes at least 200 MB/s over 10Gbps network with sync enabled and no server hacks to disable O_SYNC.
  13. peetaur

    nfsd not working when restarting it (as opposed to kill -HUP or reload)

    I opened this PR: http://www.freebsd.org/cgi/query-pr.cgi?pr=168942
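For reference, the distinction in the thread title is between a full restart and a reload of the running daemons. A sketch of the commands involved on FreeBSD, assuming the reload path goes through mountd, which re-reads /etc/exports on SIGHUP:

```shell
# Full restart: stops and starts nfsd (the case the PR is about)
service nfsd restart

# Reload: re-read /etc/exports without killing the NFS daemons
kill -HUP "$(cat /var/run/mountd.pid)"   # or: service mountd reload
```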
  14. peetaur

    move ZFS installation to another hdd (with minor changes)

    From what I've read, it "should" work and just boot slower; the cache would be recreated on boot with Solaris, but FreeBSD at the time didn't support that in the bootloader and relied on the zpool.cache file. I have no idea if this is still true. I haven't tested it since 8.2-RELEASE.
  15. peetaur

    iSCSI istgt errors

    I had this same issue, and ultra slow write speed, but it was fixed by uncommenting and changing QueueDepth to 64 on the [LogicalUnit1] part of istgt.conf. As seen commented out on line 122 in the OP here: http://forums.freebsd.org/showthread.php?t=22675 And I think this thread should be...
  16. peetaur

    Horrible iSCSI (istgt) performance

    The solution, as stated by a few, is to set QueueDepth (I found 64 works well, but I only tested the VirtualBox initiator). Why is this thread not marked solved? Is it because you wanted to know why different initiators perform so differently with the same default (unset) QueueDepth? Or did different...
  17. peetaur

    Labels "disappear" after zpool import

    Hah I was just going to post what HarryE posted. ... only one day later. But since you want the non-gpt labels, it would be more like: zpool import -d /dev/label/ yourpool And btw your script looks nice, but you could also try: zpool labelclear /dev/da0 This works fine for me, but if you put...
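The commands mentioned above, spelled out as a sketch (the pool and device names are placeholders from the post, not real values):

```shell
# Import using glabel(8) labels under /dev/label/ instead of GPT labels
zpool import -d /dev/label/ yourpool

# Clear stale ZFS label metadata from a raw device before reuse
zpool labelclear /dev/da0
```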
  18. peetaur

    It's all about jokes, funny pics...

    Here's a classic more extreme version of that: http://www.youtube.com/watch?v=KUFkb0d1kbU
  19. peetaur

    ZFS panic the system

    I wouldn't word it like that. I think it is very production ready; you just get various bugs here and there, depending on your specific use case. For example, it was lately discovered that (with 9.0 or the new NFS server, which I don't use) removing very many files on an NFS share backed by ZFS...
  20. peetaur

    Windows 7 RC

    But does that make it okay? What you said is basically a true statement, since it simply doesn't follow the standard, rather than violating it. But it is still an unfriendly concept. If I'm not using Outlook, and I get a winmail.dat attachment, what am I supposed to do with it? It is ridiculous...