Does anyone have any experience using this controller? I don't see it on the supported hardware list - it is fairly new. I'm just wondering if I should pick up a SAS2008 card or wait to see about driver support for the SAS3008.
I found a bug in one of the scripts, I'm using pcbsd-utils-1382605460.
lpreserver revertsnap calls /usr/local/share/lpreserver/backend/zfsrevertsnap.sh.
zfsrevertsnap.sh
#!/bin/sh
# ZFS functionality
# Args $1 = DATASET
# Args $2 = zfs directive...
I started using the lpreserver utility from PC-BSD for automated snapshots. I'm trying to revert a snapshot but lpreserver doesn't seem to like my syntax.
# lpreserver listsnap zroot/usr/home
NAME USED AVAIL REFER MOUNTPOINT...
Yeah, for that reason I stick to 1 TB drives. I've seen a few rebuilds fail over the course of my work. That, and I buy drives from two different sources because I've seen batches of drives fail within a few weeks of each other.
I had about 1 TB of data and it rebuilt in about a half hour. All is...
It just hit me, the drive that failed was connected to port 0 on the RAID controller. I did not have any of the 4 drives set to be the boot device in the RAID BIOS so I'm assuming it just picked drive 0. I set the boot device to be one of the 3 good drives (they still have boot blocks) and it...
I have a four-drive GPT RAID-Z setup. The four drives are connected through a RAID controller, each set up as a RAID0 virtual drive. Obviously I didn't think this through all too well, because I just had a drive failure and I'm running into an issue trying to get the replacement drive back in...
I ran the following to get a history of my zpool commands:
zpool history
From the output, I was able to verify that I did indeed set copies=2 on Filesystem A at some point in time, then set copies back to 1.
Then, I ran the following script on Filesystem A and Filesystem B:
#!/bin/sh...
# zfs get copies
NAME PROPERTY VALUE SOURCE
...
zroot/usr/home copies 1 local
zroot/homebackup copies 1 default
...
zroot/usr/home is the only dataset that shows a SOURCE of "local". Does that mean I had copies=2 set at one time?
My concern is that when I used pax to copy one directory to another, the resulting file sizes of the source and destination directories differ. With ZFS out of the picture, that discrepancy should only happen if some of the files could not be copied. A file count on the source and destination...
I am attempting to copy all of my data from zfs Filesystem A to zfs Filesystem B (same zpool). I used the following to copy the data:
pax -p aop -rw . /dest
pax completes without error.
When I run df -h, I see 694G for Filesystem A and 461G for Filesystem B. I see similar results when using...
Now I'm very confused.
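One way to tell a ZFS space-accounting artifact from genuinely missing data is to compare apparent file sizes rather than blocks consumed, since copies=2 doubles the blocks charged to a file but not the bytes in it. A small illustration, with made-up paths (GNU du syntax shown; FreeBSD's du uses -A for apparent size):

```sh
#!/bin/sh
# Apparent size counts the bytes in the file, not the blocks ZFS charged
# for it, so it is unaffected by the copies property.
mkdir -p /tmp/sizedemo && cd /tmp/sizedemo
printf 'hello\n' > file
du -b file    # GNU du: -b reports apparent size in bytes (6 here)
```

Running that over both trees and comparing the totals would show whether the contents actually match even though df disagrees.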
I removed the sleep 60 and it still loads sshguard during boot, whereas the original script failed.
#!/bin/sh
# PROVIDE: sshguard
# REQUIRE: syslogd
# KEYWORD: nojail shutdown
. /etc/rc.subr
name="sshguard"
rcvar=sshguard_enable
sig_reload="USR1"...
With sshguard version 1.5, you can start sshguard with the -l flag.
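In rc.conf terms that might look like the following sketch; the log path here is an assumption for illustration, not taken from sshguard's docs:

```sh
# /etc/rc.conf -- hypothetical entries; -l tells sshguard 1.5 which log to watch
sshguard_enable="YES"
sshguard_flags="-l /var/log/auth.log"
```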
Notes from their site:
At any rate, I added a sleep 60 to the rc script and it starts up during boot without a problem. I still wish I understood the problem better.
#!/bin/sh
# PROVIDE: sshguard
# KEYWORD: nojail...
If you simply "reboot" your vm from the AWS console, the vm will keep the same internal DNS and IP. But, if you "stop" and "start" your vm, it will be assigned new internal DNS and IP.
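One workaround, sketched here rather than taken from the tutorial, is to avoid pinning the internal IP at all: leave ListenAddress unset, or bind the wildcard explicitly so a new address after stop/start is harmless:

```
# /etc/ssh/sshd_config
# 0.0.0.0 binds every IPv4 interface, so the internal IP can change freely
ListenAddress 0.0.0.0
```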
The Private DNS (internal IP) changes when you reboot your instance. If you hard-coded that IP into sshd_config as a ListenAddress, you will have an unpleasant surprise the next time you reboot your vm.
Do you have any suggestions on how to work around that? Thanks for the tutorial, btw :)
I'm also curious about this. I'm trying to learn C from a C++ background and I just had to dig up the '-std=c99' flag for gcc because it did not like how I initialized a variable in the declaration of the for loop. I was under the assumption that C99 would be the default by now.
Thanks for the replies. Yes, it does look like it is just a difference in how tcsh and sh handle globbing.
> tcsh ./findport.sh blabla
echo: No match.
> sh ./findport.sh blabla
/usr/ports/*/*blabla*
>
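The difference can be reproduced without /usr/ports at all; the directory names below are made up for the demo. POSIX sh hands an unmatched glob to echo as the literal pattern, while tcsh does its own globbing and aborts with "No match" before echo ever runs:

```sh
#!/bin/sh
# Tiny stand-in tree for /usr/ports/<category>/<port>
mkdir -p /tmp/globdemo/cat/myport
cd /tmp/globdemo
echo */*port*      # matches: prints "cat/myport"
echo */*blabla*    # no match: sh prints the literal pattern "*/*blabla*"
```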
I found a nice tip to search for a given port:
# echo /usr/ports/*/*portname*
If the string is found, it echoes the matching paths, and if not it says "echo: No match". I tried putting that into a script as follows:
# fport.sh
#!/bin/sh
echo /usr/ports/*/*$1*
If I run the script and the string is...