Useful scripts

There's probably a nice base system tool for this, and definitely a better way to script it, but I wrote a little script to get information about ports from the ports tree (dependencies, compile-time options, pkg-descr contents, and so on).

I hope someone will find it useful!

I'm just beginning to learn shell scripting, so any suggestions, comments, etc, are very welcome.

Code:
#!/bin/sh

###########################################################################
# This script will allow user to search the ports tree and display port 
# categories, port names, port descriptions and dependency information.
###########################################################################


# set ports tree directory
pdir="/usr/ports"


# This is the help function which is called in case of wrong usage or the -h option.
help() { 
    echo "usage:
    `basename $0` [-a \"string\"] - print name, path, info, and dependencies for given string
    `basename $0` [-b port-name] - print build dependencies for given port
    `basename $0` [-d port-name] - print ALL dependencies for given port(s) recursively
    `basename $0` [-h] - print this help
    `basename $0` [-c category-name] - list category contents
    `basename $0` [-C] - list all categories in ports
    `basename $0` [-m port-name] - print missing dependencies for given port
    `basename $0` [-n port-name] - display pkg-descr contents for given port
    `basename $0` [-o port-name] - print compile-time options for given port
    `basename $0` [-s \"string\"] - search for given port
    `basename $0` [-S \"string\"] - search all \"pkg-descr\" files for given string
    `basename $0` [-r port-name] - print runtime dependencies for given port"
    } 

# This is what is run if no command line argument is given
if [ -z "$1" ]
then
    help
    exit 1
fi

# These are the main commands to be run for a given option.
while getopts "a:b:c:Cd:hm:n:o:s:S:r:" opt; do
      case "$opt" in
        a) cd "$pdir" && make search name="$OPTARG" ;;
        b) make -C "$pdir"/*/"$OPTARG" build-depends-list | sed "s:$pdir/::" | sort -d | column ;;
        c) ls -d "$pdir/$OPTARG"/*/ | sed "s:$pdir/$OPTARG/::" | column ;;
        C) ls -d "$pdir"/*/ | sed "s:$pdir/::" | column ;;
        d) make -C "$pdir"/*/"$OPTARG" all-depends-list | sed "s:$pdir/::" | sort -d | column ;;
        h) help ;; 
        m) make -C "$pdir"/*/"$OPTARG" missing | sed "s:$pdir/::" | sort -d | column ;;
        n) more "$pdir"/*/"$OPTARG"/pkg-descr ;;
        o) make -C "$pdir"/*/"$OPTARG" showconfig ;;
        s) find "$pdir"/* -maxdepth 1 -iname "*$OPTARG*" | sed "s:$pdir/::" | sort -d | column ;;
        S) grep -il "$OPTARG" "$pdir"/*/*/pkg-descr | xargs -n1 dirname | sed "s:$pdir/::" ;; 
        r) make -C "$pdir"/*/"$OPTARG" run-depends-list | sed "s:$pdir/::" | sort -d | column ;;
        \?) echo "Type '$(basename "$0") -h' for help." >&2 ;;
      esac
done
 
nickednamed said:
Code:
help() { 
    echo "usage:
    `basename $0` [-a \"string\"] - print name, path, info, and dependencies for given string
    `basename $0` [-b port-name] - print build dependencies for given port
    `basename $0` [-d port-name] - print ALL dependencies for given port(s) recursively
    `basename $0` [-h] - print this help
    `basename $0` [-c category-name] - list category contents
    `basename $0` [-C] - list all categories in ports
    `basename $0` [-m port-name] - print missing dependencies for given port
    `basename $0` [-n port-name] - display pkg-descr contents for given port
    `basename $0` [-o port-name] - print compile-time options for given port
    `basename $0` [-s \"string\"] - search for given port
    `basename $0` [-S \"string\"] - search all \"pkg-descr\" files for given string
    `basename $0` [-r port-name] - print runtime dependencies for given port"
}

I would only change that above into that below:

Code:
help() {
  NAME="$( basename $0 )"
  echo "usage:
    ${NAME} [-a \"string\"]        print name, path, info, and dependencies for given string
    ${NAME} [-b port-name]       print build dependencies for given port
    ${NAME} [-d port-name]       print ALL dependencies for given port(s) recursively
    ${NAME} [-h]                 print this help
    ${NAME} [-c category-name]   list category contents
    ${NAME} [-C]                 list all categories in ports
    ${NAME} [-m port-name]       print missing dependencies for given port
    ${NAME} [-n port-name]       display pkg-descr contents for given port
    ${NAME} [-o port-name]       print compile-time options for given port
    ${NAME} [-s \"string\"]        search for given port
    ${NAME} [-S \"string\"]        search all \"pkg-descr\" files for given string
    ${NAME} [-r port-name]       print runtime dependencies for given port"
}
 
Dzięki/Thanks.

Yes, it does seem stupid to run /usr/bin/basename twelve times instead of once. I don't know why I didn't see that earlier.
 
Something for the Linux users

Now, apologies up front if someone else has done this one before; I'll have to admit that I did not go over the entire thread just yet.

And so I used cut this evening, was going to try and help out another user of this forum:

Code:
$ fstat -f /home | cut -d ' ' -f3


(extra lines cut)
Not exactly what I expected. And yes, I know that in some cases the behaviour can be as intended:

Code:
$ ls -l ~ | cut -d ' ' -f3

6
2
2
6
1
Or does it? Why can't I grab column 2 for example:

Code:
$ ls -l ~ | cut -d ' ' -f2
276


(extra lines cut)
Sometimes it works, and sometimes it doesn't. There is a good reason for it, but I'm spoiled and used to GNU/cut.

Enter ccut or Column Cut:

Code:
#!/bin/sh

## Column Cut; grab a specific column as can be done with GNU/cut.

if [ "$1/" == "/" ]; then
        echo "Usage:"                                   > /dev/stderr
        echo "`basename $0` <column number>"            > /dev/stderr
        echo                                            > /dev/stderr
        exit 1;
fi

if [ $1 -eq $1 ] 2> /dev/null; then
        sed -E 's/[[:space:]]+/ /g' | cut -d ' ' -f $1;
else
        echo "Error: please specify a column number."   > /dev/stderr
        echo                                            > /dev/stderr
        exit 1;
fi

So, getting column 2 in the above example:

Code:
$ ls -l ~ | /home/peter/bin/ccut 2
276
6
2
2
6
1
(extra lines cut)

Hope you'll enjoy.
 
ShelLuser said:
Now, apologies up front if someone else has done this one before; I'll have to admit that I did not go over the entire thread just yet.

And so I used cut this evening, was going to try and help out another user of this forum:

Code:
$ fstat -f /home | cut -d ' ' -f3


(extra lines cut)
You should try this
Code:
fstat -f /home | cut -w -f3
 
[ By the way, a post a few posts after this one references a method that works better; I've not the time to include it here, but it is in the other thread that I mention in the next post. ]

If one has a file containing a list of reinstalls due (after one of the language ports' major version bumps, for example), one can use gcat thus. I don't remember the procedure entirely (it is one of several).
Code:
p5-AnyData-0.11  (the first line of the file; FreeBSD cat does not work as well)

for i in $(gcat file); do portmaster -d -B -i -g -P $i; done
worked for a while, but was tedious. The following, which I am following it up with, is more in line with the command I usually use.
Code:
grep p5 file0806 | grep -v corru | head -40 | awk '{print $1}' | xargs -J % portmaster -d -B -P -i -g -x gcc-4.7.4.20130831 -x perl-5.14.4 % && yell || yell
 
jalla said:
You should try this
Code:
fstat -f /home | cut -w -f3
Thanks for the suggestion, but it seems you're using a different version of FreeBSD than I am. According to the cut(1) manual page, the -w parameter doesn't exist here. Quite frankly, I don't recall it existing in cut as used on Linux either.

As such I'll rely on my script instead for now ;)
 
Manage ZFS snapshots

Although the zfs(8) manual page contains an example of setting up a rolling snapshot scheme, I'm not much of a believer, because I like to keep "filesystem management tasks" to a minimum. As such I'm not much of a fan of renaming the whole lot of snapshots every day, even though it could be perfectly safe.

I do like to maintain a series of snapshots though, and eventually came up with the script you see here. I figured I might as well share:

Code:
#!/bin/sh

## Snapshot.ZFS v1.0
##
## A script which will manage ZFS snapshots on the
## filesystem(s) of your choosing.

### Configuration section.

# ZFS pool to use.
POOL="zroot";

# Filesystem(s) to use.
FS="/home /usr/local /var"

# Retention; how many snapshots should be kept?
RETENTION=7

# Recursive; process a filesystem and all its children?
RECURSE=yes;

### Script definitions <-> ** Don't change anything below this line! **

CURDAT=$(date "+%d%m%y");
PRVDAT=$(date -v-${RETENTION}d "+%d%m%y");
PROG=$(basename $0);

if [ "${RECURSE}" = "yes" ]; then
        OPTS="-r";
fi

### Script starts here ###

if [ "$1/" == "/" ]; then
        echo "Error: no command specified."             > /dev/stderr;
        echo                                            > /dev/stderr;
        echo "Usage:"                                   > /dev/stderr;
        echo "${PROG} y : Manage snapshots."            > /dev/stderr;
        echo                                            > /dev/stderr;
        exit 1;
fi

if [ "$1" == "y" ]; then

        # Make & clean snapshot(s)
        for a in $FS; do
                ZFS=$(zfs list -r ${POOL} | grep -e "${a}$" | cut -d ' ' -f1)
                if [ -z "$ZFS" ]; then
                        echo "${PROG}: Can't process ${a}: not a ZFS filesystem." >&2
                else
                        zfs snapshot ${OPTS} ${ZFS}@${CURDAT} > /dev/null 2>&1 || echo "${PROG}: Error creating snapshot ${ZFS}@${CURDAT}" >&2
                        zfs destroy ${OPTS} ${ZFS}@${PRVDAT} > /dev/null 2>&1 || echo "${PROG}: Error destroying snapshot ${ZFS}@${PRVDAT}" >&2
                fi
        done;
else
        echo "Error: wrong parameter used."             > /dev/stderr;
        echo                                            > /dev/stderr;
        echo "Usage:"                                   > /dev/stderr;
        echo "${PROG} y : Manage snapshots."            > /dev/stderr;
        echo                                            > /dev/stderr;
        exit 1;
fi
Configuring this script is pretty simple: specify which ZFS pool to use (usually you only have one), the filesystems you want to manage, how many snapshots (days) to retain (keep in mind that the script was written with daily runs in mind), and finally whether you want recursive snapshots.

A little care should be taken, but the script does some checks itself, such as determining the filesystem name from the mount point, and it will warn you if the directory you specified isn't a valid ZFS filesystem. What it doesn't do is check whether the snapshot to destroy actually exists.
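
Since the retention window assumes one snapshot per day, the script is meant to be driven from cron; a crontab(5) entry along these lines would do it (the installation path and time here are just an example):

Code:
```
# root's crontab: take/rotate snapshots every night at 03:00
0 3 * * * /usr/local/sbin/snapshot.zfs y
```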

Alas, hope someone can find this useful.
 
ShelLuser said:
Thanks for the suggestion, but it seems you're using a different version of FreeBSD than I am. According to the cut(1) manual page, the -w parameter doesn't exist here. Quite frankly, I don't recall it existing in cut as used on Linux either.

As such I'll rely on my script instead for now ;)

You're right. I didn't know it was that recent, but you'd have to run -STABLE (8 or 9) to have that option. (And perhaps to state the obvious: -w makes cut treat any number of whitespace characters as the delimiter.)
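
For what it's worth, on releases without cut -w, awk gives much the same effect, since its default field splitting already treats any run of blanks and tabs as one separator (this is an alternative technique, not the -w suggestion above):

Code:
```shell
# awk's default FS collapses whitespace runs, so fields line up with columns.
printf '1   22\t333\n' | awk '{print $2}'
# prints: 22
```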
 
Thought I'd share a small script that does absolutely nothing ;)

Rather, it's a skeleton that I use as a starting point for every script I write. Beginning scripters may find some useful ideas here.

Code:
#!/bin/sh

# Desc: generic script skeleton

# defaults
x=""

print_usage () {
	echo "usage: $0 [-a aopt] [-htv]"
}

print_help () {
	print_usage
	echo " -a aopt
 -t     testrun - list actions but don't execute (implies '-v')
 -v     verbose
"
}

# print if verbose
pif () {
  test -z "$verbose" || echo "$*"
}

# run external cmds with "rc cmd arg ..."
# honours verbose/testrun options
# NB! special characters passed to this routine must be escaped carefully
rc () {
  test -z "$verbose" || echo "$*" 1>&2
  test -z "$testrun" || return
  eval "$@"
}

trap "exit 1" 15

while test -n "$1"
do
	case  $1 in
		-a) aopt=$2; shift;;
		-h) print_help;exit;;
		-t) testrun=1;verbose=1;;
		-v) verbose=1;;
		 *) print_usage;exit;;
	esac
	shift
done

pif "Starting script $0"
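
To see the skeleton's rc helper in action, here is a cut-down, self-contained demonstration: with testrun set, commands are only echoed to stderr; with it cleared, they actually run. (The echo commands are arbitrary stand-ins for real work.)

Code:
```shell
#!/bin/sh
# Minimal excerpt of the rc helper from the skeleton above.
rc () {
  test -z "$verbose" || echo "$*" 1>&2   # announce the command when verbose
  test -z "$testrun" || return           # stop here on a test run
  eval "$@"                              # otherwise execute it
}

verbose=1
testrun=1
rc echo "would run"        # announced on stderr only, never executed
testrun=""
rc echo "actually runs"    # announced on stderr AND executed
# stdout shows only: actually runs
```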
 
jb_fvwm2 said:
If one has a file containing a list of reinstalls due (one of the language port major version bumps, for example) ... one can use gcat thusly... don't remember the procedure entirely (it is one of several)...
Code:
p5-AnyData-0.11  (the first line of the file; FreeBSD cat does not work as well)

for i in $(gcat file); do portmaster -d -B -i -g -P $i; done
... worked for a while, but was tedious. The following, which I am following it up with, is more in line with the command I usually use...
Code:
grep p5 file0806 | grep -v corru | head -40 | awk '{print $1}' | xargs -J % portmaster -d -B -P -i -g -x gcc-4.7.4.20130831 -x perl-5.14.4 % && yell || yell

Freely ignore that post: I have a one-liner running now that works so well in this particular upgrade (if one still has /var/db/pkg as one's package database) that I posted it in the other thread ongoing presently ("FreeBSD vs Linux: 10 points...") and to the freebsd-ports list, so that someone already using pkg can maybe craft an equivalent CLI that works as well.
 
A simple internet streaming radio tuner

A simple Internet streaming radio tuner shell script for listening to radio station streams with mplayer. It should work on any Linux or FreeBSD box with bash. Resize your terminal if you keep a big list. I've already filled the list with some examples for everyone; remove what you don't want and put your own in. It doesn't require X. These streams are listed openly, so I don't believe they violate any TOS.

Code:
#! /usr/bin/env bash

# A simple script to listen to radio stations online with mplayer.
# If you wish to use a different media player like vlc, ffplay, mpv etc., modify the script.
# It will hold as many stations as you want. You'll need to resize your terminal larger
# to see all of them. Add or remove stations as you wish.

# Add or Remove Stations here, use same syntax.
# Station identifier and stream
STREAMLIST='
QUIT
BBC1| -playlist http://bbc.co.uk/radio/listen/live/r1.asx
BBC1X| -playlist http://bbc.co.uk/radio/listen/live/r1x.asx
BBC2| -playlist http://bbc.co.uk/radio/listen/live/r2.asx
BBC3| -playlist http://bbc.co.uk/radio/listen/live/r3.asx
BBC4| -playlist http://bbc.co.uk/radio/listen/live/r4.asx
BBC4X| -playlist http://bbc.co.uk/radio/listen/live/r4x.asx
BBC5| -playlist http://www.bbc.co.uk/radio/listen/live/r5lsp_aaclca.pls
BBC6| -playlist http://bbc.co.uk/radio/listen/live/r6.asx
BBC_Asia| -playlist http://bbc.co.uk/radio/listen/live/ran.asx
BBC_World| -playlist http://www.bbc.co.uk/worldservice/meta/tx/nb/live/eneuk.asx
London_Heart| -playlist http://media-ice.musicradio.com/HeartLondonMP3.m3u
Nachrichten| http://ondemand-mp3.dradio.de/file/dradio/nachrichten/nachrichten.mp3
Nl_R1| -playlist http://icecast.omroep.nl/radio1-sb-mp3.m3u
Nl_R2| -playlist http://icecast.omroep.nl/radio2-sb-mp3.m3u
Nl_R5| -playlist http://icecast.omroep.nl/radio5-sb-mp3.m3u
Dk_P1| -playlist http://live-icy.gss.dr.dk:8000/A/A03L.mp3.m3u
Dk_P3| -playlist http://live-icy.gss.dr.dk:8000/A/A05L.mp3.m3u
Dk_P5| -playlist http://live-icy.gss.dr.dk:8000/A/A25L.mp3.m3u
Fr_Info| -playlist http://www.listenlive.eu/franceinfo.m3u
Fr_Chante| -playlist http://stream1.chantefrance.com/Chante_France.m3u
Fr_Radio6| -playlist http://91.121.112.215:82/xstream.m3u
De_Bayern| -playlist http://www.antenne.de/webradio/channels/info.m3u
De_Wissen| -playlist http://www.dradio.de/streaming/dradiowissen_lq_mp3.m3u
De_NDR| -playlist http://www.ndr.de/resources/metadaten/audio/m3u/ndr903.m3u
It_RAI| -playlist http://212.162.68.230/1.mp3.m3u
It_24| -playlist http://shoutcast.radio24.it:8000/listen.pls
It_Gal| -playlist http://85.47.51.98:8000/live.m3u
Esp_Nat| -playlist http://radio1.rtve.stream.flumotion.com/rtve/radio1.mp3.m3u
Esp_3| -playlist http://radio3.rtve.stream.flumotion.com/rtve/radio3.mp3.m3u
Ru_Mayak| http://live.rfn.ru/radiomayak_fm
Ru_Cty| -playlist http://79.143.70.114:8000/cityfm-64k.aac.m3u
VietNam_Pub| -playlist http://yp.shoutcast.com/sbin/tunein-station.pls?id=1548734
Jp_A| -playlist http://yp.shoutcast.com/sbin/tunein-station.pls?id=9495227
Jakarta| -playlist http://yp.shoutcast.com/sbin/tunein-station.pls?id=110954
Persian| -playlist http://yp.shoutcast.com/sbin/tunein-station.pls?id=616871
Israla| -playlist http://yp.shoutcast.com/sbin/tunein-station.pls?id=192075
InfoWars| -playlist http://www.infowars.com/infowars.asx
NPR| mms://a1671.l2063252432.c20632.g.lm.akamaistream.net/D/1671/20632/v0001/reflector:52432?
KMOX_St_Louis| http://208.80.54.57/KMOXAMAAC?
WLS_Chicago| -playlist http://provisioning.streamtheworld.com/pls/WLSAMAAC.pls
KLIF_Dallas| -playlist http://provisioning.streamtheworld.com/pls/KLIFAMAAC.pls
KFAR_Fairbanks| -playlist http://out2.cmn.icy.abacast.com/kfaram-kfaramaac-64.m3u
KGUM_Guam| -playlist http://ice2.securenetsystems.net/KGUM2.m3u
WKAQ_Puerto_Rico| -playlist http://provisioning.streamtheworld.com/pls/WKAQAMAAC.pls
KOKC_Ok_city| -playlist http://out2.cmn.icy.abacast.com/kokc-kokcamaac-64.m3u
KMBZ_Kansas_City| -playlist http://provisioning.streamtheworld.com/pls/KMBZAMAAC.pls
WOWO_Ft_Wayne| -playlist http://asx.abacast.com/federatedmedia-wowoam-32.pls
WIND_Chicago| -playlist http://provisioning.streamtheworld.com/pls/WINDAMAAC.pl 
KWQW_Des_Moines| -playlist http://provisioning.streamtheworld.com/pls/KWQWFMAAC.pls
WGN_Chicago| http://5483.live.streamtheworld.com:80/WGNAM_SC
WMBD_Peoria| -playlist http://wmbd.serverroom.us:7048/listen.pls
WSOY_Decatur| -playlist http://in.icy2.abacast.com/neuhoff-wsoyam-32.m3u
WJBC_Bloomington| -playlist http://provisioning.streamtheworld.com/pls/WJBCAMAAC.pls
KCRW_Santa_Monica| -playlist http://media.kcrw.com/live/kcrwlive.pls
FTR| http://rs4.radiostreamer.com:9110/
OTR| -playlist http://www.otrfan.com:8000/stream.m3u
WLUJ_Springfield| http://wluj.streamon.fm/stream/WLUJ-24k.aac
KJSL-St_Louis| -playlist http://den-a.plr.liquidcompass.net/pls/KJSLAMAAC.pls
VOA| -playlist http://mfile.akamai.com/2110/live/reflector:56822.asx
Bluegrass| http://173.244.215.163:8490
NRK| -playlist http://lyd.nrk.no/nrk_radio_folkemusikk_mp3_l.m3u
Copenhagen| -playlist http://onair.100fmlive.dk/klassisk_live.mp3.m3u
Sophia| -playlist http://live.btvradio.bg/classic-fm.mp3.m3u
Helsinki| -playlist http://klasu.iradio.fi:8000/klasu-med.mp3.m3u
Stockholm| -playlist http://sverigesradio.se/topsy/direkt/2562-hi-mp3.pls
Swiss_Folk| http://50.7.234.130:8188	
BR_Klassic| -playlist http://streams.br-online.de/br-klassik_1.m3u
Bayern_Folk| -playlist http://streams.br-online.de/bayernplus_1.m3u
Hamburg| -playlist http://edge.live.mp3.mdn.newmedia.nacamar.net/klassikradio128/livestream.mp3.m3u
Berlin| -playlist http://www.kulturradio.de/live.m3u	
Warsaw| -playlist http://zetclassic-02.eurozet.pl:8100/listen.pls
Krakow| -playlist http://www.miastomuzyki.pl/n/rmfclassic.pls
Polska| http://91.121.89.153:4000
Bucharest| -playlist http://stream2.srr.ro:8020/listen.pls
Bratislava| -playlist http://live.slovakradio.sk:8000/Klasika_128.mp3.m3u
Czech| -playlist http://www.play.cz/radio/cro3-128.mp3.m3u
Irish_Fav| http://173.213.97.110:8242
Irish_Folk| http://95.211.76.204:8000
Zelengrad| http://108.166.161.206:8750
Portugal| http://188.138.16.143:8290
Peru_Folk| http://108.163.250.180:8020
Bolivianisima| http://67.212.179.132:9900
Paros_FM| http://85.17.121.103:8098
'
# Don't delete the ' above this line.

clear
echo "Select a stream to start, Press q to stop stream"
select STREAM in `echo "$STREAMLIST" | cut -d "|" -f1`; do
    if [[ "$STREAM" == "QUIT" ]]; then
        clear
        exit
    fi
    GETURL=`echo "$STREAMLIST" | grep -w -m1 "^$STREAM" | cut -d "|" -f2`
    if [[ -n "$GETURL" ]]; then
        eval mplayer "$GETURL" &> /dev/null
    fi
    echo "Select another stream, Press Enter to see List, or 1 to Quit."
done
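
The lookup mechanic is simple: select builds its menu from the first |-separated field, and the chosen name is grepped back out of the list to recover the mplayer arguments. In isolation, with a dummy entry (the URL here is a placeholder):

Code:
```shell
# Resolve a station name to its stream field, as the tuner does.
STREAMLIST='
QUIT
Demo| http://example.com/stream.m3u
'
GETURL=$(echo "$STREAMLIST" | grep -w -m1 "^Demo" | cut -d "|" -f2)
echo "$GETURL"
# prints " http://example.com/stream.m3u" (the leading space is harmless,
# since the script hands the field to mplayer via eval)
```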

Get station streams here. Or wherever you wish.
 
Perl: Reformat bonnie++ results for BB Code

I've been doing a lot of bonnie++ benchmark runs. It is a very handy program, but I find the output (both the human-readable text and the CSV line) busy and somewhat difficult to read. I've also been running replicates to get a sense of variation in the performance, and wanted to automate processing of averages and standard deviation. I've written a Perl script that extracts Read, Write and Rewrite IO data from one or more bonnie runs and summarizes them in a compact manner. The default output is BB Code, but it will also generate TSV, CSV and TiddlyWiki markup (in case anyone else uses TW).

For example, these data:
Code:
1.96,1.96,RAID-Z3x8,1,1382826096,100G,,,,335851,75,243206,64,,,719959,83,124.7,11,1,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,,307ms,1686ms,,198ms,784ms,88us,130us,133us,116us,41us,236us
1.96,1.96,RAID-Z3x8,1,1382826905,100G,,,,334785,75,243253,64,,,729968,84,133.8,5,1,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,,777ms,2188ms,,167ms,813ms,87us,127us,136us,76us,41us,69us
1.96,1.96,RAID-Z3x8,1,1382827808,100G,,,,334219,75,247885,65,,,723738,83,126.9,5,1,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,,810ms,1462ms,,267ms,871ms,103us,129us,134us,75us,41us,67us
1.96,1.96,RAID-Z3x8,1,1382833099,100G,,,,337043,75,232984,61,,,721114,83,128.2,4,1,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,,300ms,2660ms,,273ms,981ms,102us,131us,149us,82us,41us,91490us
1.96,1.96,RAID-Z3x8,1,1382834158,100G,,,,330744,74,209210,54,,,510749,57,125.7,12,1,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,,1473ms,2512ms,,470ms,784ms,85us,128us,139us,88us,41us,68us
1.96,1.96,RAID-Z3x8,1,1382830980,100G,,,,332169,75,217242,57,,,711728,81,125.1,11,1,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,,1029ms,1994ms,,183ms,1281ms,83us,133us,138us,78us,41us,68us
If these lines are stored in a file bonnie.txt and processed with ./formatBonnie.pl bonnie.txt, the script will generate:

bonnie++ [size=-2]v1.96[/size] RAID-Z3x8 100G N=6
Read=670[size=-2]±77[/size] Write=330 Rewrite=230[size=-2]±14[/size] [size=-2](MB/sec)[/size] Latency: 260,780,2100 [size=-2](ms)[/size]

(Using a PHP block for markup since Perl formatting does not seem to be supported in the forum)
Code:
#!/usr/bin/perl -w

# Copyright Charles Tilford, 2013
# May be freely used or modified under the Perl Artistic License 2.0

# Code to help simplify bonnie++ benchmark output, including BB formatting

use strict;

my %params;
my $colMap = {
    Version        => 0,
    Machine        => 2,
    Concurrency    => 3,
    FileSize       => 5,
    Write          => 9,
    WriteCPU       => 10,
    Rewrite        => 11,
    RewriteCPU     => 12,
    Read           => 15,
    ReadCPU        => 16,
    WriteLatency   => 37,
    RewriteLatency => 38,
    ReadLatency    => 40,
};
my $mTok = {
   Read    => 'R',
   Write   => 'W',
   Rewrite => 'RW',
};
my $mCol = {
   Read    => 'green',
   Write   => 'blue',
   Rewrite => 'purple',
};
my $twBgCol = {
   Read    => '#9f9',
   Write   => '#f9f',
   Rewrite => '#ff9',
};
my $timeMap = {
    'ns' => 0.000001,
    'us' => 0.001,
    'ms' => 1,
    's'  => 1000,
};

my $results;
my $debug     = "";
my $precision = 2;
my $pm        = '±';
my $sdCmpFmt  = "[/b] [size=-2]$pm%s[/size][b]";
my $sdFmt     = "[size=-2]$pm%s[/size]";
my $l10       = log(10);
my @common    = qw(Version Machine Concurrency FileSize);
my %metricH   = %{$colMap}; map { delete $metricH{$_} } @common;
my @metrics   = sort keys %metricH;
my @main      = qw(Read Write Rewrite);
my %isTime    = map { $_ .'Latency' => 1 } @main;
my %isRate    = map { $_  => 1 } @main;

my $twFmt     = '| !%s| %s | %s |';
for my $z (1..3) { map { $twFmt .= sprintf("backgroundColor:%s; %%s |",
                                           $twBgCol->{$_}) } @main }
$twFmt .= "\n";

&parse();

unless ($results) {
    warn <<HELP;

This script is designed to reformat the output from the disk
benchmarking utility Bonnie++. It can be provided with human-readable
output, or the compact comma-separated line. It will read directly
from the command line, or from one or more files. Multiple files or
bonnie strings can be passed.

Several formats are supported, including:

 BB  (Default) Use forum BB code
 TW  TiddlyWiki tabular format
 TSV Tab-separated values
 CSV Comma-separated values

To change the format, just include one of those codes on the command line:

  formatBonnie.pl bonnieOut1.txt bonnieOut2.txt TSV

HELP

exit;
}

&format();

warn "\n$debug\n" if ($debug);

sub format {
    if ($params{TW}) {
        print "| !Machine| !Mem | !N | ".join(' | ', map { "!$_" } @main).
            " |>|>| !${pm}%~StdDev |>|>| !Latency (ms) |\n";
    }
    foreach my $machine (sort keys %{$results}) {
        my $data = $results->{$machine};
        my @reps = @{$data->{raw}};
        $data->{N} = $#reps + 1;
        # Common metrics, like machine name and version:
        map { $data->{$_} = $data->{raw}[0]{$_} } @common;

        foreach my $key (@metrics) {
            # Map over average and StdDev for metrics
            my @vals = map { $_->{$key}  } @reps;
            $data->{$key} = [ &avg_stdv( \@vals ) ];
        }
        if ($params{TW}) {
            &format_tiddlywiki( $data );
        } elsif ($params{TSV}) {
            &format_tsv( $data );
        } elsif ($params{CSV}) {
            &format_csv( $data );
        } else {
            &format_bb( $data );
        }
    }
}

sub format_bb {
    my $data = shift;
    my $txt = "";
    $txt .= sprintf("[url=\"http://www.freshports.org/benchmarks/bonnie++\"][cmd]bonnie++[/cmd][/url] [size=-2]v%s[/size] [b]%s[/b] [i]%s", $data->{Version},
                    $data->{Machine}, $data->{FileSize});

    $txt .= sprintf(" N=%d", $data->{N}) if ($data->{N} > 1);
    $txt .= "[/i]\n";
    my (@lats);
    foreach my $key (qw(Read Write Rewrite)) {
        my ($avg, $sd) = map { int(0.5 + $_) } @{$data->{$key}};
        $txt .= sprintf(" %s=[color=green][b]%s[/b]", $key, $avg);
        $txt .= sprintf($sdFmt, $sd) if ($sd);
        $txt .= "[/color]";
        my ($lavg, $lsd) = @{$data->{$key.'Latency'}};
        push @lats, $lavg;
    }
    $txt .= " [size=-2](MB/sec)[/size] Latency: ".join(',', @lats)." [size=-2](ms)[/size]\n";
    print $txt;
}

sub format_bb_compact {

    # Not exposed - it was fairly garish

    my $data = shift;
    my $txt = "";
    $txt .= sprintf("Bonnie++ %s [b]%s[/b] [i]%s[/i]", $data->{Version},
                    $data->{Machine}, $data->{FileSize});

    $txt .= sprintf(" N=%d", $data->{N}) if ($data->{N} > 1);
    $txt .= "\n";
    my (@mkys, @mets, @lats);
    foreach my $key (qw(Read Write Rewrite)) {
        my ($avg, $sd) = map { int(0.5 + $_) } @{$data->{$key}};
        my $fmt = sprintf("[color=%s]%%s[/color]", $mCol->{$key});
        push @mkys, sprintf($fmt, $mTok->{$key});
        my $met = $avg;
        $met .= sprintf($sdCmpFmt, $sd) if ($sd);
        push @mets, sprintf($fmt, $met);
        my ($lavg, $lsd) = @{$data->{$key.'Latency'}};
    }
    $txt .= "[b]IO MB/sec ".join('/', @mkys).": ".join(' / ', @mets)."[/b]";
    $txt .= "\n";
    print $txt;
}

sub row {
    # Convert the data structure back into a (simplified) row
    my $data = shift;
    my @row =  ($data->{Machine}, $data->{FileSize}, $data->{N});
    push @row, map { $data->{$_}[0] } @main;
    push @row, map { $data->{$_}[2] } @main;
    push @row, map { $data->{$_.'Latency'}[0] } @main;
    return wantarray ? @row : \@row;
}

sub format_tiddlywiki {
    # [url]http://tiddlywiki.com/[/url]
    my $row  = &row( shift );
    # Bold the main metrics:
    map { $row->[$_] = "''".$row->[$_]. "''" } (3..5);
    printf($twFmt, map { $_ || "" } @{$row});
}

sub format_tsv {
    my $row = &row( shift );
    print join("\t", @{$row}) ."\n";
}

sub format_csv {
    my $row = &row( shift );
    print join(",", @{$row}) ."\n";
}

sub sigfig {
    # Round a value to specified precision
    my ($val, $sf) = @_;
    $sf ||= $precision;
    my $rv = $val;
    if ($val > 0) {
        my $exp = log($val) / $l10;
        my $mod = 10 ** (int($exp) - $sf +  ($exp > 0 ? 1 : 0));
        $rv = int(0.5 + $val / $mod) * $mod;
    }
    return $rv;
}

sub parse {
    foreach my $req (@ARGV) {
        if ($req =~ /^(bb|tw|csv|tsv)$/i) {
            # This is a configuration parameter
            $params{uc($1)} = 1;
        } elsif ($req =~ /[\+\,]{6,}/) {
            # Will assume that a stretch of '+' and ',' is a Bonnie++ line
            &parse_line( $req );
        } elsif (-s $req) {
            # A file, presume it contains Bonnie++ results
            &parse_file( $req );
        } else {
            &msg("Unrecognized parameter", $req);
        }
    }
}

sub parse_file {
    my $file = shift;
    if (open(FILE, "<$file")) {
        while (<FILE>) {
            if (/[\+\,]{6,}/) {
                &parse_line($_);
            }
        }
        close FILE;
    } else  {
        &msg("Failed to read file", $file, $!);
    }
}

sub parse_line {
    my $line = shift || "";
    $line    =~ s/[\n\r]+$//;
    my @row  = split(/\s*\,\s*/, $line);
    my %data;
    while (my ($key, $col) = each %{$colMap}) {
        my $val = $row[$col];
        if ($isTime{$key}) {
            # Normalize time to milliseconds
            if ($val =~ /(\d+)([a-z]+)/) {
                $val  = $1;
                my $u = lc($2);
                if (my $fact = $timeMap->{$u}) {
                    $val *= $fact;
                } else {
                    &msg("Unrecognized time unit '$u'");
                }
            }
        } elsif ($isRate{$key}) {
            # Normalize to MB/sec
            $val /= 1024;
        }
        $data{$key} = $val;
    }
    unless ($data{Read}) {
        &msg("Failed to parse Bonnie++ line", $line);
        return;
    }
    my $machine = $data{Machine} || "Computer";
    $results ||= {};
    push @{$results->{$machine}{raw}}, \%data;
}

sub msg {
    warn join("\n  ", map { defined $_ ? $_ : '' } @_)."\n";
}

sub avg_stdv {
    my $arr = shift;
    my $n   = $#{$arr} + 1;
    my ($sum, $sum2) = (0,0);
    foreach my $val (@{$arr}) {
        $sum  += $val;
        $sum2 += $val * $val;
    }
    my $avg = $sum / $n;
    my $std = 0;
    my $percSD = 0;
    if ($n > 2) {
        $std = sqrt(($sum2 / $n) - ($avg * $avg));
        $percSD = &sigfig(100 * $std / $avg);
    }
    $avg = &sigfig($avg);
    $std = &sigfig($std);
    if ($percSD < 1) {
        # Do not bother reporting StdDev less than 1% of average
        $percSD = $std = 0;
    }
    return ($avg, $std, $percSD);
}
 
File Integrity Checking with mtree. I found this wonderful script here and have modified it slightly.
Code:
#!/usr/local/bin/perl -w

# Globals ---------------------------------------------------------------------

# Sane default location of the mtree executable.
# Can be changed with the --mtree option.
my $mtree="/usr/sbin/mtree";
# Sane default location of the mtree file checksum database.
# Can be changed with the --checksum-file option.
my $checksum_file="/usr/mtree/fs.mtree";
# Sane default location of the mtree file exclude list.
# Can be changed with the --exclude-file option.
my $exclude_file="/usr/mtree/fs.exclude";
# Stores the executable name, mainly to refer to ourselves in help.
my $executable=$0;
# Stores the list of filesystem changes reported by mtree.
my $changes="";
# Stores the list of e-mail addresses to send results to.
my @emails;
# Whether or not to scan for file changes. 
# (default behavior, disabled in case of -uo)
my $scan_for_changes=1;
# Whether or not to update the checksums. (requires -u flag)
my $update=undef;
# Top level directory to monitor for changes. 
# (can be edited with the -p option.)
my $path="/";
# Whether or not to print scan results to stdout. 
# (Default behavior, see -q option to disable.)
my $print_results=1;
# Path to the sendmail executable.
# (see --sendmail option to change.)
my $sendmail="/usr/sbin/sendmail";
# Logfile location.
# (see -l option to change.)
my $log="/var/log/mtree.log";
# e-mail reply-to address. (see --reply-to option)
my $reply_to=undef;
# e-mail subject (see --subject option).
my $subject="Filesystem changes for " . `date`;

# Display script usage & help. ------------------------------------------------

sub show_help
{
  print '
  Usage: ' . $executable . ' [OPTION] ...
  Show or E-mail out a list of changes to the file system.

  mtree operation options:

    -u,  --update        Updates the file checksum database after 
                         showing/mailing changes.
    -uo, --update-only   Only update the file checksum database.
    -p,  --path          Top level folder to monitor (default: /)
    -q,  --quiet         Do not output scan results to stdout or any
                         other output.

  Path configuration options:

    -l,  --log           Logfile location 
                         (default: /var/log/mtree.log)
         --mtree         Set the location of the mtree executable. 
                         (default is /usr/sbin/mtree)
         --checksum-file Set the location of the file containing the 
                         mtree file checksums. 
                         (default: /usr/mtree/fs.mtree)
         --exclude-file  Set the location of the file containing the 
                         list of files and folders to exclude from the 
                         mtree scan. (default is /usr/mtree/fs.exclude)

  E-mail options:

    -e,  --email         Adds specified e-mail address as destination.
         --sendmail      Set the location of the sendmail executable. 
                         (default: /usr/sbin/sendmail)
         --reply-to      Set the e-mail reply-to address.
         --subject       Sets The e-mail subject. 

  Misc options:

    -h,  --help          Display this help text.
 

  Example usage:

    ' . $executable . ' -uo
    ' . $executable . ' -u -q -e foo@example.com -e bar@example.com
    ' . $executable . ' /var/www --mtree /usr/local/sbin/mtree

';

}

# Parses a command-line argument and its parameter. ---------------------------

sub parse_commandline_argument
{
  my $arg = shift;
  my $param = shift;
  if (substr($arg,0,1) eq '-')
  {
    if ($arg eq '--mtree')
    {
      $mtree = $param;
    }
    if ($arg eq '--sendmail')
    {
      $sendmail = $param;
    }
    if ($arg eq '-q' or $arg eq '--quiet')
    {
      $print_results = undef;
    }
    if ($arg eq '--reply-to')
    {
      $reply_to = $param;
    }
    if ($arg eq '--subject')
    {
      $subject = $param;
    }
    if ($arg eq '--checksum-file')
    {
      $checksum_file = $param;
    }
    if ($arg eq '-l' or $arg eq '--log')
    {
      $log = $param;
    }
    if ($arg eq '--exclude-file')
    {
      $exclude_file = $param;
    }
    if ($arg eq '-h' or $arg eq '--help')
    {
      show_help();
      exit 0;
    }
    if ($arg eq '-e' or $arg eq '--email')
    {
      if ($param =~ m/\@/)
      {
        push(@emails,$param);
      }
      else
      {
        die "Invalid e-mail address: $param\n";
      }
    }

    if ($arg eq '-u' or $arg eq '--update')
    {
      $update=1;
    }
    if ($arg eq '-uo' or $arg eq '--update-only')
    {
      $update=1;
      $scan_for_changes=undef;
    }
  }
}

# Script entry point. ---------------------------------------------------------

# Parse commandline arguments.
my $argc=0;
foreach my $argument(@ARGV)
{
  chomp($argument);
  if ($argc != $#ARGV)
  {
    my $next_argument = $ARGV[$argc+1];
    chomp($next_argument);
    parse_commandline_argument($argument,$next_argument);    
  }
  else
  {
    parse_commandline_argument($argument);
  }
  $argc++;
}

# Check if we have all the necessary components.

(-x $mtree) or die "$mtree is not executable.\n";
(-w $checksum_file) or die "$checksum_file is not writeable.\n";
(-r $exclude_file) or die "$exclude_file is not readable.\n";
if ($scan_for_changes)
{
  (-w $log) or die "$log is not writeable.\n";
}
if ($#emails >= 0)
{
  (-x $sendmail) or die "$sendmail is not executable.\n";
}

if ($print_results)
{
  print "\nScanning for changes...\n";
}

# Get the list of changed files if desired.
if ($scan_for_changes)
{
  
  $changes=`$mtree -K md5digest,sha1digest,sha256digest,ripemd160digest,cksum -f $checksum_file -X $exclude_file -p $path`;  

  # If there are no changes since last scan, then 
  # we're done with everything.
  # <= 3 to account for \n\r and maybe a space...
  if (length($changes) <= 3 )
  {
    if ($print_results)
    {
      print "All done.\n";
    }    
    exit 0;
  }

  # Write changes to log file.
  open LOGFILE,">>$log" or die $!;
  print LOGFILE $changes;
  close LOGFILE;

  # Output changes if desired.
  if ($print_results)
  {
    print "$changes\n";
  }

  # E-mail out changes if desired.
  foreach my $mail(@emails)
  {
    if ($print_results)
    {
      print "E-mailing $mail ...\n";
    }
    chomp($mail);
    open(SENDMAIL, "|$sendmail -t") or die "Cannot open $sendmail: $!";
    print SENDMAIL "To: $mail\n"; 
    if ($reply_to)
    {
      chomp($reply_to);
      print SENDMAIL "Reply-to: $reply_to\n";      
    }
    if ($subject)
    {
      chomp($subject);
      print SENDMAIL "Subject: $subject\n";      
    }
    print SENDMAIL "Content-type: text/plain\n\n";
    print SENDMAIL $changes;
    close(SENDMAIL);
  }

}

# Update checksum file if desired.
if ($update)
{
  if ($print_results)
  {
    print "Updating checksums...\n";
  }
  system("$mtree -K md5digest,sha1digest,sha256digest,ripemd160digest,cksum -c -X $exclude_file -p $path > $checksum_file");
}

if ($print_results)
{
  print "All done.\n";
}
 

# done.
exit 0;

The original script used the defaults within mtree, which means only the following information is stored for each file/directory:
  • flags - The file flags as a symbolic name.
  • gid - The file group as a numeric value.
  • mode - The current file's permissions as a numeric (octal) or symbolic value.
  • nlink - The number of hard links the file is expected to have.
  • size - The size, in bytes, of the file.
  • link - The file the symbolic link is expected to reference.
  • time - The last modification time of the file.
  • uid - The file owner as a numeric value.
I modified the script to also store the following:
  • cksum - The checksum of the file using the default algorithm specified by the cksum(1) utility.
  • md5digest - The MD5 message digest of the file.
  • sha1digest - The FIPS 160-1 (``SHA-1'') message digest of the file.
  • sha256digest - The FIPS 180-2 (``SHA-256'') message digest of the file.
  • ripemd160digest - The RIPEMD160 message digest of the file.
By default the script expects to live in /usr/mtree, and you need to run the following from the same directory the script is in, or the script will complain.
Code:
touch fs.mtree fs.exclude
touch /var/log/mtree.log
We can then exclude directories from the integrity checking by adding them to fs.exclude. The original script author recommended that at a minimum the following be excluded:
Code:
./dev
./proc
./var
./tmp
./usr/mtree
./usr/share/man
./usr/share/openssl/man
./usr/local/man
./usr/local/lib/perl5/5.8.8/man
./usr/local/lib/perl5/5.8.8/perl/man
Note that you have to prefix folders with ./

I run this script every hour via cron with the following entry, and get an email alert if anything has changed.
Code:
@hourly root /usr/mtree/automtree -u -q -e youremail@example.com > /dev/null 2>&1
This script is not fool-proof. It is vulnerable to the following:
  1. An attacker can modify the script to not call mtree.
  2. An attacker can stop the script from running by modifying /etc/crontab.
  3. An attacker can add all directories to fs.exclude and run the script to update only, then future changes will not be detected.
  4. An attacker could modify the hashes and other attributes stored in fs.mtree.

I have set the script and fs.exclude to be immutable, which should protect against attacks 1 and 3. You could also set your /etc/crontab to be immutable to protect against 2, but this script cannot prevent attack 4, which is why I run it frequently.
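For reference, the immutable flag mentioned above is set with chflags(1); a minimal sketch of that one-time hardening step, assuming the paths used in this post:

```shell
# Set the system immutable flag on the script and the exclude list.
chflags schg /usr/mtree/automtree /usr/mtree/fs.exclude

# To edit either file later, clear the flag first:
# chflags noschg /usr/mtree/automtree /usr/mtree/fs.exclude
```

At kern.securelevel 1 or higher even root cannot clear schg without dropping to single-user mode, which is what makes this effective against attacks 1 and 3.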

To run the script you simply execute /usr/mtree/automtree -u.

This updates the file checksum database after showing/mailing changes. If you have just edited a file and want to update the checksum database without getting an alert, you can run /usr/mtree/automtree -uo, which only updates the file checksum database.

I hope you enjoy this script and a big thanks to John Sennesael for originally creating the script.
 
That's a good question.

I found this script online, and it suited my needs. It was simple to configure, and offers a sufficient level of security for the home server I manage. I have not had sufficient time to invest in learning how to configure one of the more robust file integrity tools.

My understanding was that security/aide in single host mode and security/integrit did not do any validation of the file hash database and so are vulnerable to the same attacks I described above.

security/tripwire and security/samhain do sign the configuration and file integrity database. I've tried using samhain in the past, but I found it a real pain to correctly configure. I have not tried security/tripwire.
 
I've written myself a short script to allow normal users to mount USB devices. It was cobbled together from two or more tutorials, none of which worked for me on their own, so it may contain some superfluous steps. It is meant to be run with root privileges from a normal user's session. Feedback is welcome.

Code:
#!/bin/sh

# script must be run as root
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

# making mount point "media" in user home directory...
if [ ! -d "$HOME/media" ]; then
	echo "creating directory $HOME/media..."
	mkdir "$HOME/media"
else 
	echo "the directory $HOME/media already exists - continuing as normal"
fi

# giving user permission to read / write mount point
chown $USER:$USER $HOME/media

# giving normal users permission to mount devices
sysctl vfs.usermount=1

# making changes permanent in /etc/sysctl.conf
echo 'vfs.usermount=1' >> /etc/sysctl.conf

# change permissions and ownership for usb devices plugged in after boot
# making all da* devices readable and writable by their owner and wheel group
echo '[userrules=5]' >> /etc/devfs.rules
echo "add path 'da[0-9]*' mode 0660 group wheel" >> /etc/devfs.rules

# restarting devfs
/etc/rc.d/devfs restart

# tell rc.conf to load the rule set every time when the system is booted
echo 'devfs_system_ruleset="userrules"' >> /etc/rc.conf

# AFAIK, the previous action doesn't affect devices available at boot

# lets change ownership and permissions of usb devices available at boot
# allow member of operator group to mount usb devices
echo 'own       /dev/da0       root:operator' >> /etc/devfs.conf
echo 'perm      /dev/da0      0660' >> /etc/devfs.conf
 

# need to add user to operator group in order to mount usb devices
# NB - as a "bonus" this will allow normal users to shutdown/reboot system
pw groupmod operator -m $USER

echo 'make sure your user is in the operator group - check the following output'
id $USER
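Once the setup above is in place, a user in the operator group should be able to mount a stick somewhat like this (the device node and filesystem type are assumptions; check dmesg(8) output for yours):

```shell
# mount a FAT-formatted stick at the media directory created above
mount -t msdosfs /dev/da0s1 "$HOME/media"
# ... use the files ...
umount "$HOME/media"
```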
 
Here is one for compiling freeglut applications from C source files.
Code:
#!/bin/sh

cc -static -c -I/usr/local/include $@.c -o $@.o
cc $@.o -L/usr/local/lib/ -lglut -lGLU -lGL -lX11 -lXext -lm -o $@
rm $@.o
 
cenu said:
Here is one for compiling freeglut applications from C source files.
Code:
#!/bin/sh

cc -static -c -I/usr/local/include $@.c -o $@.o
cc $@.o -L/usr/local/lib/ -lglut -lGLU -lGL -lX11 -lXext -lm -o $@
rm $@.o
I thought you were supposed to use Makefiles for that ;)
 
Heh,

At least use set -e to cause an immediate exit on the first error.

Code:
#!/bin/sh

set -e

cc -static -c -I/usr/local/include $@.c -o $@.o
cc $@.o -L/usr/local/lib/ -lglut -lGLU -lGL -lX11 -lXext -lm -o $@
rm $@.o

But yeah... Makefiles are generally good ;)
 
PORTUPDATER for PKGNG

This is what I use now, under the PKGNG framework. See if it works for you. Inspect any flags and paths before actually running it. I'm not responsible for you nuking your own installation, or installing 30,000 ports.

Note that, as before, running it as portupdater injects a random sleep of 0-3600 seconds. So you can put it in cron, and it will mail the output to the cron owner (usually root). If you run it on the command line, add any parameter, e.g. portupdater yes or portupdater a, and it will run immediately. This also means it will perform interactive tasks, so be prepared to go through the steps.
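The jitter behaviour itself is easy to sketch in plain sh. This is not the script's exact mechanism, just an illustration of the idea, using od(1) on /dev/urandom as a portable random source:

```shell
#!/bin/sh
# Sketch of the cron jitter: derive a random delay of 0-3599 seconds.
# od reads two random bytes as an unsigned integer; tr strips whitespace.
random_delay() {
    echo $(( $(od -An -N2 -tu2 /dev/urandom | tr -d '[:space:]') % 3600 ))
}

if [ $# -lt 1 ]; then
    # no arguments: behave like the cron invocation and wait first
    delay=$(random_delay)
    echo "cron run: would sleep for ${delay} seconds"
    # sleep "${delay}"
else
    echo "argument given: running immediately"
fi
```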

Code:
#!/bin/sh

! [ -d /usr/ports ] && echo "This works much better with an installed ports tree, run 'portsnap fetch extract' first" && exit 1
! [ -f /usr/local/sbin/pkg ] && echo "You have not installed ports-mgmt/pkg yet." && exit 1
! [ -f /usr/local/sbin/portmaster ] && echo "You have not installed ports-mgmt/portmaster yet." && exit 1

/usr/bin/touch /tmp/lastportupdate
hostname=$(hostname)
date=$(/bin/date)
day=$(/bin/date | /usr/bin/awk '{print $1,$2,$3}')
oldday=$(/bin/cat /tmp/lastportupdate)

echo "
Updating portaudit first.
"
/usr/local/sbin/pkg audit -F

echo "
Portupdater for ${hostname} started at ${date}


========== Fetching latest ports snapshot from server. ==================
"

if [ $# -lt 1 ]
then
portvar="cron"
else
portvar="fetch"
fi

/usr/sbin/portsnap ${portvar} || exit 1

echo "
========== Updating ports tree with new snapshot. =======================
"
/usr/sbin/portsnap update || exit 1
cd /usr/ports && make fetchindex || exit 1

echo "
============ Cleaning out all obsolete distfiles. =======================
"
/usr/local/sbin/portmaster -y --clean-distfiles || exit 1

if [ ${portvar} = "fetch" ]
then
echo "
Ah, you're actually here. Good.

Running some (possibly) interactive stuff.
"
/bin/sleep 5

echo "
============ Cleaning out stale ports. ==================================
"
/usr/local/sbin/portmaster -s || exit 1
echo "
============ Checking port dependencies. ================================
"
/usr/local/sbin/pkg check -dn || exit 1

echo "
============ Cleaning up /var/db/ports. =================================
"
/usr/local/sbin/portmaster --check-port-dbdir || exit 1
fi

echo "
=================== See which ports need updating. ======================
"
/usr/local/sbin/pkg version -ovL '=' || exit 1

echo "
================= Warnings from /usr/ports/UPDATING. ====================
"
weekago=$( /bin/date -v-1w +%Y%m%d )
lastpkg=$( ls -D %Y%m%d -ltr /var/db/pkg | /usr/bin/tail -n1 | /usr/bin/tr -s " " "\t" | /usr/bin/cut -f 6 )
if [ ${weekago} -lt ${lastpkg} ]
 then usedate=${weekago}
 else usedate=${lastpkg}
fi
/usr/local/sbin/pkg updating -d ${usedate}
echo "
See /usr/ports/UPDATING for further details.

========== Portupdater done. ============================================

"

echo "
================== Cleaning out old packages. ===========================
"    
/usr/local/sbin/pkg clean -y
 
I've created a Python script to download various IP blacklists. It's not quite 1.0 yet. The regex filtering does not yet work, and I have not tested the auto reloading of the rules. I've put the code up on github. I welcome any contributions.

Code:
import urllib2
import re
import sys, argparse
import subprocess

#blocklist information

blocklists = {
	'abuse.ch Zeus Tracker (Domain)': {
		'id': 'abusezeusdomain',
		'type': 'list',
		'checks': ['domain'],
		'url':  'https://zeustracker.abuse.ch/blocklist.php?download=baddomains',
		'regex' : '',
		'file' : 'zeus.domain',
		'table' : 'zeus_domain'
	},
	'abuse.ch Zeus Tracker (IP)': {
		'id': 'abusezeusip',
		'type': 'list',
		'checks': ['ip', 'netblock'],
		'url': 'https://zeustracker.abuse.ch/blocklist.php?download=badips',
		'regex' : '',
		'file' : 'zeus.pf',
		'table' : 'zeus'
	},
	'abuse.ch SpyEye Tracker (Domain)': {
		'id': 'abusespydomain',
		'type': 'list',
		'checks': ['domain'],
		'url':  'https://spyeyetracker.abuse.ch/blocklist.php?download=domainblocklist',
		'regex' : '',
		'file' : 'spyeye.domain',
		'table' : 'spyeye_domain'
	},
	'abuse.ch SpyEye Tracker (IP)': {
		'id': 'abusespyip',
		'type': 'list',
		'checks': ['ip', 'netblock'],
		'url':  'https://spyeyetracker.abuse.ch/blocklist.php?download=ipblocklist',
		'regex' : '',
		'file' : 'spyeye.pf',
		'table' : 'spyeye'
	},
	'abuse.ch Palevo Tracker (Domain)': {
		'id': 'abusepalevodomain',
		'type': 'list',
		'checks': ['domain'],
		'url':  'https://palevotracker.abuse.ch/blocklists.php?download=domainblocklist',
		'regex' : '',
		'file' : 'palevo.domain',
		'table' : 'palevo_domain'
	},
	'abuse.ch Palevo Tracker (IP)': {
		'id': 'abusepalevoip',
		'type': 'list',
		'checks': ['ip', 'netblock'],
		'regex': '',
		'url':  'https://palevotracker.abuse.ch/blocklists.php?download=ipblocklist',
		'file': 'palevo.pf',
		'table' : 'palevo'
	},
	'malwaredomains.com IP List': {
		'id': 'malwaredomainsip',
		'type': 'list',
		'checks': ['ip', 'netblock'],
		'url': 'http://www.malwaredomainlist.com/hostslist/ip.txt',
		'regex' : '',
		'file' : 'malwaredomains.pf',
		'table' : 'malwaredomains'
	},
	'malwaredomains.com Domain List': {
		'id': 'malwaredomainsdomain',
		'type': 'list',
		'checks': ['domain'],
		'url': 'http://www.malwaredomainlist.com/hostslist/hosts.txt',
		'regex': '',
		'file' : 'malwaredomains.domain',
		'table' : 'malwaredomains_domain'
	},
	'PhishTank': {
		'id': 'phishtank',
		'type': 'list',
		'checks': ['domain'],
		'url': 'http://data.phishtank.com/data/online-valid.csv',
		'regex': '/^(https?:\/\/)?([\da-z\.-]+)\.([a-z\.]{2,6})([\/\w \.-]*)*\/?$/',
		'file' : 'phishtank.domain',
		'table' :'phishtank_domain'

	},
	'malc0de.com List': {
		'id': 'malc0de',
		'type': 'list',
		'checks': ['ip', 'netblock'],
		'url': 'http://malc0de.com/bl/IP_Blacklist.txt',
		'regex' : '',
		'file' : 'malc0de.pf',
		'table' : 'malc0de'
	},
	'TOR Node List': {
		'id': 'tornodes',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://torstatus.blutmagie.de/ip_list_all.php/Tor_ip_list_ALL.csv',
		'regex' : '',
		'file' : 'tornodes.pf',
		'table' : 'tor_nodes'

	},
	'blocklist.de List': {
		'id': 'blocklistde',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://lists.blocklist.de/lists/all.txt',
		'regex' : '',
		'file' : 'blocklistde.pf',
		'table' : 'blocklistde'
	},
	'Autoshun.org List': {
		'id': 'autoshun',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://www.autoshun.org/files/shunlist.csv',
		'regex': '\((\b(?:(?:25[0-5]|2[0-4]\d|[01]?\d\d?)\.){3}(?:25[0-5]|2[0-4]\d|[01]?\d\d?))\b\)ssss',
		'file' : 'autoshun.pf',
		'table' : 'autoshun'
	},
	'Internet Storm Center': {
		'id': 'isc',
		'type': 'query',
		'checks': [ 'ip' ],
		'url': 'https://isc.sans.edu/api/topips/records/1000/today/handler?json',
		'regex': '(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b',
		'file': 'isc.pf',
		'table' : 'storm_center'
	},
#	'AlienVault IP Reputation Database': {
#		'id': 'alienvault',
#		'type': 'list',
#		'checks': [ 'ip', 'netblock' ],
#		'url': 'https://reputation.alienvault.com/reputation.generic',
#		'regex': '',
#		'file': 'alienvault.pf',
#		'table' : 'alienvault'
#	},
	'OpenBL.org Blacklist': {
		'id': 'openbl',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://www.openbl.org/lists/base.txt',
		'regex' : '',
		'file' : 'openbl.pf',
		'table' : 'openbl'
	},
	'Nothink.org SSH Scanners': {
		'id': 'nothinkssh',
		'type': 'list',
		'checks': [ 'ip', 'netblock', 'domain' ],
		'url': 'http://www.nothink.org/blacklist/blacklist_ssh_week.txt',
		'regex' : '',
		'file' : 'nothinkssh.pf',
		'table' : 'nothinkssh'
	},
	'Nothink.org Malware IRC Traffic': {
		'id': 'nothinkirc',
		'type': 'list',
		'checks': [ 'ip', 'netblock', 'domain' ],
		'url': 'http://www.nothink.org/blacklist/blacklist_malware_irc.txt',
		'regex' : '',
		'file' : 'nothinkirc.pf',
		'table' : 'nothinkirc'
	},
	'Nothink.org Malware HTTP Traffic': {
		'id': 'nothinkhttp',
		'type': 'list',
		'checks': [ 'ip', 'netblock', 'domain' ],
		'url': 'http://www.nothink.org/blacklist/blacklist_malware_http.txt',
		'regex' : '',
		'file' : 'nothinkhttp.pf',
		'table' : 'nothinkhttp'
	},
	'C.I. Army Malicious IP List': {
		'id': 'ciarmy',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://cinsscore.com/list/ci-badguys.txt',
		'regex' : '',
		'file' : 'ciarmy.pf',
		'table' : 'ciarmy'
	},
	'Spamhaus drop list': {
		'id': 'spamhaus',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://www.spamhaus.org/drop/drop.lasso',
		'regex' : '',
		'file' : 'spamhaus.pf',
		'table' : 'spamhaus'
	},
	'Emerging Threats - Russian Business Networks List': {
		'id': 'emergingthreats-rbn',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://rules.emergingthreats.net/blockrules/rbn-ips.txt',
		'regex' : '',
		'file' : 'emergingthreats-rbn.pf',
		'table' : 'emergingthreats-rbn'
	},
	'Project Honeypot': {
		'id': 'projecthoneypot',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://www.projecthoneypot.org/list_of_ips.php?t=d&rss=1',
		'regex' : '',
		'file' : 'projecthoneypot.pf',
		'table' : 'projecthoneypot'
	},
	'Rulez.sk blocklist': {
		'id': 'rulez.sk',
		'type': 'list',
		'checks': [ 'ip', 'netblock' ],
		'url': 'http://danger.rulez.sk/projects/bruteforceblocker/blist.php',
		'regex' : '',
		'file' : 'rulez.sk.pf',
		'table' : 'rulez.sk'
	}
		
}

def downloadAndProcessBlocklist(url, regex, filename):
	req = urllib2.Request(url)
	req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')

	#download blocklist
	try:
		response = urllib2.urlopen(req)
		contents = response.read()

	except urllib2.URLError as e:
		if hasattr(e, 'reason'):
			print 'We failed to reach a server.'
			print 'Reason: ', e.reason
		elif hasattr(e, 'code'):
			print 'The server couldn\'t fulfill the request.'
			print 'Error code: ', e.code
		else:
			print 'unknown error'
		# bail out: contents was never assigned, so there is nothing to process
		return

	#process blocklists
	if regex != '':
		match = re.findall(regex, contents)
		# join the matches back into newline-separated text so the
		# file write below receives a string, not a list
		contents = '\n'.join(match)

	#write to file
	try:
		with open(location+filename, 'w') as f:
			f.write(contents)
	except IOError as e:
		print e.strerror

def reloadFirewallRules(firewall, location, table):

	if firewall == 'pf':

		print ('/sbin/pfctl -t ' + table + ' -T replace -f ' + location + value['file'])
		#subprocess.call(['/sbin/pfctl', '-t', table, '-T', 'replace', '-f', location + value['file']])

#	if firewall == 'iptables':
		#todo

# main

#sensible defaults
firewall = 'pf'
listType = 'ip'
location = '/root/tables/'

parser = argparse.ArgumentParser(description='IP blocklist downloader and importer for pf and ip tables')
parser.add_argument('-fw', '--firewall_type',help='firewall type, currently pf and iptables are supported', required=False)
parser.add_argument('-t', '--blocklist_type',help='blocklist type, currently ip, netblock, domain and all are supported', required=True)
parser.add_argument('-l', '--blocklist_location',help='location to store blocklists', required=False)
parser.add_argument('-n', '--blocklist_names',help='specify names of blocklists to download', required=False, type=lambda s: [str(item) for item in s.split(',')])

args = parser.parse_args()

if args.blocklist_type in ['ip','domain','netblock']:
	listType = args.blocklist_type
else:
	print('Invalid option, only ip, domain and netblock are currently supported')
	sys.exit(2)

if args.firewall_type != None:
	firewall = args.firewall_type

if args.blocklist_location != None:
	location = args.blocklist_location


for key, value in sorted(blocklists.items()):

	#download all blocklists of the given type
	if args.blocklist_names == None:
		if listType in value['checks']:
			print('downloading '+key)
			downloadAndProcessBlocklist(value['url'], value['regex'], value['file'])
			reloadFirewallRules(firewall, location, value['table'])
	else:
		#download specified blocklists
		if value['id'] in args.blocklist_names:
			print('downloading '+key)
			downloadAndProcessBlocklist(value['url'], value['regex'], value['file'])
			reloadFirewallRules(firewall, location, value['table'])
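For reference, the pf side of that reload is a persistent table in pf.conf plus a pfctl table replace. A sketch with assumed table name and path, matching the defaults above:

```shell
# pf.conf fragment (table name and path are assumptions):
#   table <zeus> persist file "/root/tables/zeus.pf"
#   block in quick from <zeus>

# After the script refreshes the file, reload the table contents:
pfctl -t zeus -T replace -f /root/tables/zeus.pf
```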
 
Sometimes you'll need a temporary directory to test an idea (to help on a forum :) ), build source, etc.
A simple csh-alias:
Code:
alias use-sb 'setenv SANDBOX `mktemp -d /home/zsolt/sandbox/sandbox.XXXXXXXX` ; tcsh -l ; rm -rf ${SANDBOX}'
and relevant part of my ~/.login:
Code:
    if (${?SANDBOX} == 1) then
        cd ${SANDBOX}
        set prompt = "${green}%n ${white}|${cyan} %T ${white}| ${yellow}%~${end} \n${magenta}<sandbox>${blue} $ ${end} "
    endif

When I run use-sb, a new tcsh shell starts, the prompt becomes sandbox-specific, and the shell changes into the sandbox directory. When I exit from this shell (Ctrl-D), the sandbox directory is erased.
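For Bourne-shell users, roughly the same trick can be written as a function. This is my own sketch, not part of the post above; the parent directory for sandboxes is an assumption:

```shell
# sh(1) sketch of the csh sandbox alias: make a throwaway directory,
# run an interactive shell inside it, and remove it on exit.
use_sb() {
    SANDBOX=$(mktemp -d "${TMPDIR:-/tmp}/sandbox.XXXXXXXX") || return 1
    ( cd "$SANDBOX" && "${SHELL:-/bin/sh}" -i )
    rm -rf "$SANDBOX"
}
```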
 