Useful scripts

Since I am new here, I am still trying to wrap my head around basic package management: who installs what, from where, etc. So here are two one-liners that help me a lot:

List all explicitly installed packages from 'unknown-repository' (so lists all ports?), sorted by size:
Code:
pkg query -e '%a = 0' "%n;%v;%sh;%R;%sb" | grep -i unknown-repository | sort --field-separator=';' -g -k5 | cut -f 1,2,3,4 -d';' | column -s';' -t

Same but for known repositories (so lists all binary packages?):
Code:
pkg query -e '%a = 0' "%n;%v;%sh;%R;%sb" | grep -v unknown-repository | sort --field-separator=';' -g -k5 | cut -f 1,2,3,4 -d';' | column -s';' -t

The sizes are not ideal since they don't count the dependencies, but it is good enough to give a bird's-eye view of the system userland.
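
A variation on the same idea (an untested sketch) is to sum the sizes instead of listing them, e.g. the total size of everything built from ports:
Code:
pkg query -e '%a = 0' '%R %sb' | awk '$1 == "unknown-repository" {t += $2} END {printf "%.1f MiB\n", t / 1048576}'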
 
A command line I use often, to corral bunches of files into a directory:

find . -type f -name "*.txz" -exec cp {} /usr/home/ron/collection \;


It's old as dirt, and pretty obvious, but maybe not to some newbies. It searches, starting at the current directory, for all files with the "txz" extension, and copies them into the ~/collection directory.
 
A while ago I was fed up with all the Linux package management tools. Every one of them works differently, so I decided to write a simple wrapper called repkg.
Here's a link to the repository: https://github.com/graudeejs/repkg

The idea is to have a common interface that abstracts package management in a way similar to pkgng.
 
find . -type f -name "*.txz" -exec cp {} /usr/home/ron/collection \;

find(1) is the Swiss Army knife for me; it works across FreeBSD and Linux systems. Classic but useful.
 
As a web systems developer I often need to dump the production database. However, to be able to do that, I also need to ssh to the production server (due to the firewall on the DB server).
This is very cumbersome.
To make my life easier I've developed two scripts that use my password store.

db.sh lets me access the production DB from a remote host by simply typing
Code:
db.sh db/production
dump_db.sh lets me dump the production DB from a remote host
Code:
dump_db.sh db/production > dump.sql.gpg
To improve performance, I compress the dumped SQL with gzip server-side, then gunzip it on my local PC. The dump is immediately gpg-encrypted with my public key (because I'm on a laptop and I don't fully trust hardware-based encryption).

The password-store entry contains a simple configuration.
Something like:
Code:
db_password
ssh: user@example.com
host: in-the-cloud.eu-west-1.rds.amazonaws.com
adapter: mysql
username: db_username
database: db_name
port: 3306
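
Roughly, the dump script might look something like this (just a sketch of the idea, not the actual script; the field parsing and the gpg recipient name are placeholders):
Code:
#!/bin/sh
# sketch: read connection details from the pass entry, dump on the server,
# gzip server-side, gunzip locally, encrypt with gpg
# note: naive quoting of the password; fine for a sketch, not for production
entry="$1"                                   # e.g. db/production
conf=$(pass show "$entry")
password=$(printf '%s\n' "$conf" | head -n 1)
ssh_host=$(printf '%s\n' "$conf" | awk '/^ssh:/ {print $2}')
db_host=$(printf '%s\n' "$conf" | awk '/^host:/ {print $2}')
db_user=$(printf '%s\n' "$conf" | awk '/^username:/ {print $2}')
db_name=$(printf '%s\n' "$conf" | awk '/^database:/ {print $2}')

ssh "$ssh_host" "MYSQL_PWD='$password' mysqldump -h $db_host -u $db_user $db_name | gzip -c" \
    | gunzip -c \
    | gpg --encrypt --recipient my-gpg-key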

P.S.
While I'm here I want to mention the Nitrokey, which ensures that my private keys stay safe.
 
I have a couple of scripts that I use to retrieve U.S. National Weather Service information. I prefer these over the Gnome/KDE/Xfce4 weather applets because they retrieve data from the U.S. rather than the hard-coded source in Northern Europe. They also don't constantly poll a site, saving CPU cycles and bandwidth.
This one retrieves aviation weather from the nearest airport that offers weather services. My nearest airport is in Yakima, WA (KYKM). Alternative US airport codes are listed here.
Code:
#!/bin/sh
curl -sk http://tgftp.nws.noaa.gov/data/observations/metar/decoded/KYKM.TXT | \
fold -w 78 -s
echo ""
printf "<Enter to Close>"; read nothing

I've coupled it with an x11/yad entry that places a small cloud icon in my panel's system tray, and it generates this:
Code:
YAKIMA AIR TERMINAL, WA, United States (KYKM) 46-34N 120-32W 324M
Oct 26, 2017 - 05:53 PM EDT / 2017.10.26 2153 UTC
Wind: from the NNE (020 degrees) at 5 MPH (4 KT):0
Visibility: 10 mile(s):0
Sky conditions: clear
Temperature: 66.9 F (19.4 C)
Dew Point: 34.0 F (1.1 C)
Relative Humidity: 29%
Pressure (altimeter): 30.29 in. Hg (1025 hPa)
ob: KYKM 262153Z 02004KT 10SM CLR 19/01 A3029 RMK AO2 SLP256 T01940011
cycle: 22

<Enter to Close>

Code:
# Start weather system tray applet
(sleep 2 && \
yad --notification --image=weather-overcast \
 --text="Yakima, WA Weather" --no-middle \
 --command='xterm +sb -g 72x16-0+38 \
 -T "Yakima, WA Weather" \
 -e "/home/jsh/scripts/weather.sh"') &

A similar script can be used to obtain a Terminal Area forecast:
Code:
#!/bin/sh
# This is a simple script that downloads current weather conditions and zone
# forecast from the National Weather Service and formats the output.
#
# To change the forecast zone, replace wa/waz027 with another forecast zone.
# See <http://weather.noaa.gov/pub/data/forecasts/zone/> for a list.
#
curl -sk http://tgftp.nws.noaa.gov/data/forecasts/zone/wa/waz027.txt| \
fold -w 78 -s
echo ""
printf "<Enter to Close>"; read nothing

Lastly, a script that displays an animated GIF of the 4 latest radar images using graphics/imagemagick

Code:
#!/bin/sh

# This is a simple script that downloads an animated gif of
# the latest 4 radar images
# This script is configured for the Pacific Northwest composite
#

# To change the site edit "PACNORTHWEST_loop.gif".

curl https://radar.weather.gov/ridge/standard/PACNORTHWEST_loop.gif | \
 animate -immutable -loop 0 -title "NorthWest Radar Loop"

Airports are identified by ICAO codes. You can browse what is available here.
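
If you want one script to cover any airport, a small variation (untested sketch) could take the ICAO code as an argument:
Code:
#!/bin/sh
# same METAR script as above, station code passed as an argument (default KYKM)
station="${1:-KYKM}"
curl -sk "http://tgftp.nws.noaa.gov/data/observations/metar/decoded/${station}.TXT" | \
fold -w 78 -s
echo ""
printf "<Enter to Close>"; read nothing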
 
Here's a trick for attaching my GELI-encrypted disks using sysutils/password-store:
Code:
$ pass show geli/password/in/password-store | head -n 1 | sudo geli attach -p -k - /dev/da0

This way the password is passed via stdin as the keyfile for geli.
To initialize geli safely, first generate a password with password-store, then run something like:
Code:
$ pass show geli/password/in/password-store | head -n 1 | sudo geli init -P -K - /dev/da0

Also, to ensure that I don't forget how to attach the disk, I simply added the command to the comments of the password-store entry.

Note that if you're going to save the password in a keyfile, make sure it ends with a newline, since the newline isn't stripped the way it is on the command line ;)
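
For example (a sketch, with a made-up path), if you'd rather keep the secret in an on-disk keyfile instead of piping it every time, simply redirecting the pass output keeps that trailing newline, so the resulting key stays identical:
Code:
# hypothetical: save the same secret as a keyfile (trailing newline included)
pass show geli/password/in/password-store | head -n 1 > ~/da0.key
chmod 600 ~/da0.key
sudo geli attach -p -k ~/da0.key /dev/da0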
 
If you're using ProtonVPN and have ever tried to set up OpenVPN on FreeBSD, you might notice that if you generate a config for Linux there are two lines that don't work on FreeBSD:
Code:
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
A quick Google search will turn up https://github.com/masterkorp/openvpn-update-resolv-conf; however, my problem with this script is that it uses bash.

To solve this, I forked the repo and rewrote the code to work with sh on FreeBSD: https://github.com/graudeejs/openvpn-update-resolv-conf-freebsd/blob/master/update-resolv-conf.sh


Warning: it's not fully tested, as I didn't bother to test all possible use cases; however, for my ProtonVPN setup it works like a charm. If you have problems with it, feel free to file a bug report on GitHub.
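
For reference, the replacement directives in the .ovpn config might look roughly like this, assuming you save the forked script under /usr/local/etc/openvpn/ (the path is just an example); script-security 2 is needed so OpenVPN is allowed to run external scripts:
Code:
script-security 2
up /usr/local/etc/openvpn/update-resolv-conf.sh
down /usr/local/etc/openvpn/update-resolv-conf.sh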
 
Speaking of handy one-liners: from time to time I find myself in a single-user boot console, often without proper terminal settings and practically unable to use the standard (vi) editor. That's where our friend ed comes in very handy. Let's say I need to edit a config file and change the line "Port 22" to "Port 666":

printf "/^Port/s/Port 22/Port 666/\nw\nq\n" | ed -s test.conf

Tested on FreeBSD, Linux, HP-UX and Solaris.
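
If a here-document is easier on the eyes, the same edit can be written like this (equivalent sketch):

ed -s test.conf <<'EOF'
/^Port/s/Port 22/Port 666/
w
q
EOF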
 
Some useful commands I always keep in my ~/.bin directory:

- uptime in mins:
uptime | awk -F, '{sub(".*up ",x,$1);print $1}' | sed -e 's/^[ \t]*//'

- installed package count:
doas pkg info | wc -l | sed -e 's/^[ \t]*//'

- zroot/ROOT/default free %:
df -hk | egrep 'ROOT|/$' | awk '{print$5}'

I have yet to find an easy way to measure the free RAM percentage on FreeBSD that doesn't involve arithmetic on the vm.stats.vm.v_*_count sysctl values. What's more, I'm not sure I truly understand the relationship between inactive, free, wired and cached memory. NetBSD's mount_procfs(8) provides some additional nodes for compatibility with Linux, so when the pseudo-filesystem is mounted (the default) I can always rely on /proc/meminfo to grep all the needed info, including the free memory percentage, with something like:
$(awk -F ':|kB' '/MemFree:/ {printf $2}' /proc/meminfo) / 1024

For FreeBSD, however, I found a very cool script online that I'm attaching below (as 'mem.txt').
To just get the free memory percentage, you can pipe it through awk and cut like this:
mem.sh | awk 'FNR==18 {print $6}' | cut -c 1-3
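
For completeness, the sysctl arithmetic mentioned above is actually fairly short (a rough sketch of my own; note that v_free_count counts only truly free pages, so whether "free" alone is the right measure is debatable):

#!/bin/sh
# free memory percentage from sysctl page counters (approximate)
free=$(sysctl -n vm.stats.vm.v_free_count)
total=$(sysctl -n vm.stats.vm.v_page_count)
echo "$free $total" | awk '{printf "free: %.1f%%\n", $1 / $2 * 100}'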


Off-Topic: graudeejs, looking at your profile photo, are you Ibara from OpenBSD? I'm a great fan of his ports including oksh, mg, and streamlink :cool:
 

Attachments
  • mem.txt (5.3 KB)
"For my ally is the Shell, and a powerful ally it is. Jobs creates it, makes them log. Its commands surround us and guide us. Scripting beings are we! Not those crude mouse clickers! You must use the Shell around you.. here, between mouse and keyboard. Even between... the browser and the forum.."

uhm.. I'm in that mood again :p

I've messed with shell scripts a lot over the years and I'd even go so far as to say that shells are the glue that keeps our systems together. Alas, even though I've done plenty of scripting, I'm also lazy at times. So when I had to process a list like this:

Code:
$ ls | sed -E 's/-[0-9]+\..*\txz//g' | uniq -cd                            
   2 boost-libs
   5 ffmpeg
   2 glib
   3 harfbuzz
   3 harfbuzz-icu
... I quickly resorted to /bin/csh because it could somehow grok the list much better:
Code:
$ for a in `cat list`; do echo $a | cut -w -f2; done
2
boost-libs
5
ffmpeg
2
glib
3
harfbuzz
3
harfbuzz-icu
If you try this out for yourself you'll notice that the cut command didn't do anything at all. Even using quotes around $a won't make a difference. The C shell on the other hand...
Code:
% foreach a ("`cat list`")
foreach? echo $a | cut -w -f2
foreach? end
boost-libs
ffmpeg
glib
harfbuzz
harfbuzz-icu
Change -f2 into -f1 and you get the number of occurrences.

Now, I did Google this a few times and even though I did see IFS getting mentioned several times I never gave it too much thought, also because csh(1) never mentions it and because the given explanations were often just plain poor.

Well, today I finally dove into sh(1) and I figured it out :)

IFS, or the Input Field Separator, determines what characters are to be used for "field splitting":

Code:
     IFS           Input Field Separators.  The default value is <space>,
                   <tab>, and <newline> in that order.  This default also
                   applies if IFS is unset, but not if it is set to the empty
                   string.
<CUT>
     Embedded newlines before the end of the output are not
     removed; however, during field splitting, they may be translated into
     spaces depending on the value of IFS and the quoting that is in effect.
<CUT>
     Subsequently, a field is delimited by either

     1.   a non-whitespace character in IFS with any whitespace in IFS
          surrounding it, or

     2.   one or more whitespace characters in IFS.

     If a word ends with a non-whitespace character in IFS, there is no empty
     field after this character.
This all seems very vague and theoretical, I know, but stay with me for now. Also very important to know is this:
Code:
     Dollar-Single Quotes
             Enclosing characters between $' and ' preserves the literal
             meaning of all characters except backslashes and single quotes.
             A backslash introduces a C-style escape sequence:

             \n          Newline
It got me thinking... SO:

Code:
#!/bin/sh

IFS=$'\n'

for a in `cat list`; do
        echo $a | cut -w -f2;
done
And here is the list again which I used:
Code:
   2 boost-libs
   5 ffmpeg
   2 glib
   3 harfbuzz
   3 harfbuzz-icu
Try commenting out IFS and see what happens.
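
Here's a minimal demonstration of the difference (my own toy example, not taken from the list above): with the default IFS the string is split on spaces as well, while with a newline-only IFS it stays one field:
Code:
#!/bin/sh
line='   2 boost-libs'

# default IFS (space, tab, newline): the unquoted $line splits into two words
for a in $line; do echo "word: [$a]"; done

# newline-only IFS: the whole line, leading spaces included, stays one word
IFS=$'\n'
for a in $line; do echo "word: [$a]"; done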

(edit): The key to this mystery is that, if you read the quote above carefully, you'll notice that <space> is listed in the default IFS right alongside <newline>. So, uhm, what do you think caused those indents before and after the numbers?

As mentioned: I know that IFS gets mentioned several times on the Net already, it's also why I got pointed towards it. But most of the given examples never bother to explain why it does what it does, and that is simply not good enough for me.

I don't care about working solutions until I know & understand what makes them tick. And now I do :)

Hope this could be useful for some of you!
 
# no # for a in `cat list`; do echo "$a" | cut -d' ' -f2 ; done

# don't do that, because "echo" works differently on different Unix systems, so it's not wise to use echo to feed file text lines to text utilities (and the echo -e thing also isn't available on other Unixes). Use cat(1) or supply the file name to the utility.

# in general, never use a bare $a in bash or csh. Use "$a", because if it's not in quotes it may be INTERPRETED by the shell and end up running software you had no intent of running! Ouch!

# store your list in a file first and the following works

cat list | cut -d' ' -f2

# also, use -d' ' rather than -w; other Unixes will not take -w. Apple's BSD-derived cut(1) man page says:

STANDARDS
The cut utility conforms to IEEE Std 1003.2-1992 (``POSIX.2'').

HISTORY
A cut command appeared in AT&T System III UNIX

# while IFS works, it's more of a hassle, and really you should restore IFS unless you're sure the script won't be sourced by another script.
XIFS=$IFS
IFS=$'\n'
# do something
IFS=$XIFS

# finally: many would urge you to learn awk

cat list | awk '{print $2}'
ls -la | awk '{print $9}'

# but really you should use find if you want a list of file names because it's the right tool for admins to use in scripts

find . -type f -maxdepth 1

# but do NOT use this - because if you ever move your script elsewhere you'll find -printf %f no longer works
# because not all Unixes support it (instead print the full info and select fields with cut or awk)
find -type f -printf %f

# ls -1
# ls -1 seems to work, but if your file system contains sockets, mounts or symlinks, and case (in)sensitivity is an issue, you can get into trouble if you're not in a "simple place". find lets you avoid listing things the kernel shows that aren't really files or aren't really located "here" (see its options for more info)
 
# no # for a in `cat list`; do echo "$a" | cut -d' ' -f2 ; done

# don't do that because "echo" works differently on different unix so it's not wise
You do realize what OS this forum is all about, right? ;) Why on earth would I bother myself with what other environments might do? Not to mention that your motivation is actually quite off too; this has nothing to do with the OS itself but rather with the shell you're using.

And /bin/sh ('Bourne') is pretty much a set standard within this field. I've been using this specific construct since the days of SunOS around 1995 or so and have continued to use it in dozens of shell scripts on SunOS / Solaris, HP-UX, Linux and several BSD variants, never running into problems.

At best it could be changed so that the backticks get exchanged for $(), which can make it a little more readable, but the rest of your argumentation is quite off in my opinion.

# in general Never use $a in bash or csh.
This was never about bash or csh in the first place. (edit) Also, your argumentation is once again quite off; interpretation would only happen when you'd use the variable outside the scope of anything else, which is obviously not very efficient to begin with.

But the funny part is that even quotes won't make too much of a difference because although your current shell instance may not interpret the variable, the shell that gets called will.

Code:
#!/bin/sh

a="echo m000"

$a
"$a"
I'm sure this doesn't give you the results you'd expect :D

# store your list in a file first and the following works
lol!

Sorry, but I can't take this comment too seriously. That would only create more unnecessary clutter, add extra unneeded complexity and in general would actually tax the system more, depending on what you're doing.

(edit2) You share a lot of dry theory but no compelling arguments that actually make sense to me. As I mentioned earlier: this is FreeBSD we're talking about, so commenting that it's best not to use certain command-line parameters just because of other operating systems is simply absurd IMO.
 
There's no need to cat ... | ....
Code:
awk '{print $2}' list
Now that's an argument I can appreciate, and you're completely right. awk is one of those things I still need to study a lot more myself. I've used it several times in the past (mostly to process config files, and at one time to retrieve the names of any configured jails), but only very sparsely.
 
Allow me to comment …
Code:
$ for a in `cat list`; do echo $a | cut -w -f2; done
2
boost-libs
5
ffmpeg
2
glib
3
harfbuzz
3
harfbuzz-icu
To get the second word from a list of two-word lines, the correct way is actually also quite simple:
Code:
$ while read a b; do echo $b; done < list
boost-libs
ffmpeg
glib
harfbuzz
harfbuzz-icu
No need to use cut or any other external commands.
Also note that the use of backquotes (a.k.a. backticks) is discouraged for several reasons. Better use $(...) notation for command substitution.

I think that csh exists in FreeBSD's base system purely for historic reasons, but it shouldn't actually be used for anything, especially not for scripting. If you're curious, ask the search engine of your choice for “csh programming considered harmful”. ;-)

Apart from that, if a task grows sufficiently complex that you're forced to play with IFS, eval and other evil things, you should rather implement it with a more capable scripting or programming language, such as awk, Python or whatever.
 
I think that csh exists in FreeBSD's base system purely for historic reasons, but it shouldn't actually be used for anything, especially not for scripting.
Well, I definitely disagree with that. I've been using csh as the root shell for pretty much as long as I used FreeBSD and I think there's a major advantage in doing so.

Csh is fully aimed at interaction. A very good example can be seen above when I used "for a in..." with sh and its "foreach a" counterpart in csh. The latter allows ("softly forces"?) you to cut up all your commands into smaller pieces and provide them one at a time. This makes it much easier to carefully check all your commands to verify that they're really going to do what you intended.

With sh (and most other Bourne-like shells), not so much; when you enter enough commands, the first part of your line will eventually disappear from the screen and that's it.

As to scripting...

If you're curious, ask the search engine of your choice for “csh programming considered harmful”. ;-)
I read a few posts (I was familiar with some already) but it all boils down to "It's harmful because it works differently than Bourne", to which I can only reply "Well, duh!". I can't help but pick that up as people blaming the tool for its (in)abilities. And then you have plenty of people blindly copying that list of examples (they're not real arguments) as if it somehow holds any value within the context of good vs. bad. Yet it doesn't: it all boils down to csh behaving differently than other shells. There's a solution to that and it's called a manual ;)

And csh also has plenty of advantages. When I test something then it makes sense to have only stdout ('|') or stdout + stderr ('|&') because who cares about them separately? An error is nothing without output ('context') and it gets hard to find a problem without errors ;) Of course that changes when you're scripting.

And well... in the list you can read about an issue with kill -l `cat foo` vs. /bin/kill -l `cat foo`. It doesn't help that I can't reproduce this on csh myself, but it gets worse when I notice that other shells behave in pretty much the same uncanny way:

Code:
peter@zefiris:/home/peter $ echo $0
-/usr/local/bin/ksh
peter@zefiris:/home/peter $ kill -l `cat bugzilla.png` 
: bad numberin/ksh: kill: PNG
peter@zefiris:/home/peter $ /bin/kill -l `cat bugzilla.png`
usage: kill [-s signal_name] pid ...
       kill -l [exit_status]
       kill -signal_name pid ...
       kill -signal_number pid ...
peter@zefiris:/home/peter $ csh -l
Nice bash prompt: PS1='(\[$(tput md)\]\t <\w>\[$(tput me)\]) $(echo $?) \$ '
                -- Mathieu <mathieu@hal.interactionvirtuelle.com>
% kill -l `cat bugzilla.png`
HUP INT QUIT ILL TRAP ABRT EMT FPE KILL BUS SEGV SYS PIPE ALRM TERM URG STOP 
TSTP CONT CHLD TTIN TTOU IO XCPU XFSZ VTALRM PROF WINCH INFO USR1 USR2 LWP 
% /bin/kill -l `cat bugzilla.png`
usage: kill [-s signal_name] pid ...
       kill -l [exit_status]
       kill -signal_name pid ...
       kill -signal_number pid ...
% sh -
$ kill -l `cat bugzilla.png`
usage: kill [-s signal_name] pid ...
       kill -l [exit_status]
       kill -signal_name pid ...
       kill -signal_number pid ...
$ /bin/kill -l `cat bugzilla.png`
usage: kill [-s signal_name] pid ...
       kill -l [exit_status]
       kill -signal_name pid ...
       kill -signal_number pid ...
Each to their own but I'd say that csh gave the most reasonable response in this scenario ;) And yes, I realize that the list is most likely dated, but I cannot help but wonder how the other shells behaved back then.

Now, don't get me wrong: I'm definitely not advocating csh scripting. Simply because I'm already familiar with plenty of those quirks. As I mentioned earlier: csh excels as an interactive tool but it's definitely not the best for scripting, sh is much more useful. But actually considering it harmful is IMO ridiculous; it's not the tool which causes damage, but the tool that's using it.

Apart from that, if a task grows sufficiently complex that you're forced to play with IFS, eval and other evil things, you should rather implement it with a more capable scripting or programming language, such as awk, Python or whatever.
That I fully agree with :) Still, sometimes for hobby projects it can be fun to take the shell to extremes and still get work done. Also because sometimes (not always) you'll have less overhead when using the shell itself than a full-blown scripting language.

Even so... I really need to finally spend more time on learning awk :)

Thanks again for your comments!
 
Well, I definitely disagree with that. I've been using csh as the root shell for pretty much as long as I used FreeBSD and I think there's a major advantage in doing so.
When I started with UNIX in general (that was before FreeBSD even existed), I used csh as my login shell – actually tcsh, to be exact¹. But at some point its deficiencies started to annoy me, and I disliked the fact that I couldn't simply paste parts from a shell script to the command line in order to try them out, modify them until they work, then paste them back into the shell script. At that point I decided it would be beneficial to switch to a Bourne shell as my login shell. The only question was: which one? Back at that time, FreeBSD's /bin/sh was not really good for interactive work (it didn't have a history, for example). Some of my friends used bash, so I tried it. But then I discovered zsh, gave it a try, too, and liked it much better. Since then, zsh is my login shell.

¹ Note: FreeBSD's /bin/csh is not a real csh, but tcsh.
Csh is fully aimed at interaction. A very good example can be seen above when I used a "for a in..." with sh an its "foreach a" counterpart in csh. The latter allows ("softly forces"?) you to cut up all your commands into smaller pieces and provide them one at a time. This makes it much easier to carefully check all your commands to verify that they're really going to do what you intended.
Have you actually tried it? You can do the very same in bourne shell. But you're not forced to do it.
With sh (and most other bourne-like shells) not so much; when you enter enough commands then the first part of your line will eventually disappear from the screen and that's it.
Huh? That's simply not true. You can enter as much as fits into your terminal window's width + height. Although it's probably not a good idea to construct commands that large in a single interactive line. But you can do it if you want.

I'm not going to comment the other things you wrote … Let's just agree that we disagree. :)
Even so... I really need to finally spend more time on learning awk :)
That's definitely a good idea. Many people think that awk is only good for cutting rows (like '{print $1}') and are unaware that it is a full-fledged programming language. In fact you can write scripts in pure awk, just use #!/usr/bin/awk -f as the first line. (You can write pure sed scripts, too, of course …)
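
For instance, a tiny sketch (wrapped in sh here; just an illustration, nothing from this thread) that summarises the duplicate-package list from earlier by name and total count:
Code:
#!/bin/sh
awk '
    { count[$2] = $1; total += $1 }
    END {
        for (p in count)
            printf "%-15s %s\n", p, count[p]
        printf "files counted: %d\n", total
    }
' list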

By the way, I have an alias root="su -m root", so whenever I type root, I get a root prompt with the same shell as my login shell (i.e. zsh in my case) without actually having to modify root's login shell (which I leave at /bin/csh because it is never used anyway).
 
(about parts of the command line disappearing)

Huh? That's simply not true. You can enter as much as fits into your terminal window's width + height.
You're right about sh, that was a mistake on my part, but it seems to depend on the shell you're using. I tested it on my laptop just now and noticed a difference in behavior between ksh (which is my personal favorite) and sh (where this didn't happen). Just for context, this is the behavior I meant:

(screenshot: the start of the command line scrolling out of view in ksh)


I'm not sure off the top of my head what causes this; come to think of it, I suppose it could also be an effect of setting up a specific prompt, but what you see here is the end of for a in `ls | uniq -wd | tee test (a useless command, just for testing).

Something for my never ending todo list on things to further examine & study :)
 
some useful commands I always keep in my ~/.bin directory:

- uptime in mins:
uptime | awk -F, '{sub(".*up ",x,$1);print $1}' | sed -e 's/^[ \t]*//'

- installed package count:
doas pkg info | wc -l | sed -e 's/^[ \t]*//'
....
Sensucht94, please keep in mind that sed doesn't understand \t; that's a GNU extension. To catch both spaces and tabs you can use [[:blank:]] (or use gsed in your scripts instead).
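
So the uptime one-liner above, made portable to BSD sed, would be something like:

uptime | awk -F, '{sub(".*up ",x,$1);print $1}' | sed -e 's/^[[:blank:]]*//'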
 
(about parts of the command line disappearing)

You're right about sh, that was a mistake on my part, but it seems to depend on the shell you're using. I tested it on my laptop just now and noticed a difference in behavior between ksh (which is my personal favorite) and sh (where this didn't happen). ...
Ok, I see what you mean. I can confirm that it does not happen with FreeBSD's /bin/sh, nor bash or zsh. At least not with the default settings. In zsh everything can be configured somehow, so I wouldn't be surprised if you can enable such a “single-line” behavior if you want.

Out of curiosity, why is ksh your favorite? And which one of the various implementations? I mean, ksh93 is supposed to be the most standards-compliant, while mksh and oksh are somewhat more “modern” and have more features. However, their usability and features fall clearly behind both bash and zsh. The only trade-off in zsh is the fact that its syntax for creating co-processes differs from POSIX (even though zsh can emulate ksh in every other regard). That's a rarely used feature, though.
 