What's the go-to scripting language to learn for general-purpose scripting on FreeBSD?

The problem is that when you start a codebase, you don't know how long it will be used, how large the data thrown at it will get, or whether you will eventually need constructs that the scripting language's interpreter doesn't provide a shortcut for.
That is not usually the case with most scripts. But if this happens, I usually rewrite them and clean up the things that were hacked in! If one waits too long and the script grows to much more than 1000 lines, the motivation for a rewrite drops rapidly!

And I avoid Python if possible! If I feel data structures are needed right from the start, I use C/V/Go etc.
 
That is not usually the case with most scripts. But if this happens, I usually rewrite them and clean up the things that were hacked in! If one waits too long and the script grows to much more than 1000 lines, the motivation for a rewrite drops rapidly!

I dunno. It is hard to find the threshold and to act at the right moment.

Also, rewrites always have a tendency to introduce bugs into the code you have translated from the base version, so there's extra trouble ahead.
 
Also, rewrites always have a tendency to introduce bugs into the code you have translated from the base version, so there's extra trouble ahead.
That is why one should rewrite more ;-) The first version is often not done very cleanly, as you're still finding your way about. Then it can serve as a rough reference for the second version: now that you are not focused on solving the immediate problem, you can do some more looking ahead.
 
I'm following the conversation between you both, cracauer@ and bakul, and just want to say that I agree 100% with cracauer@

In my experience, there's little time to rewrite when there's much to do, and rewriting tends to add new bugs and to restart the production process altogether, so it's better to make the best initial decisions possible.
 
We are talking about scripts, not systems!

I don't think that makes a difference. We were talking about scripts that are growing into something less suitable for a scripting language.

I have never participated in a "rewrite or not" debate where the people in favor of the rewrite wouldn't want to sneak in some "obvious" improvements in the rewrite. Then you are right in the middle of the Second System Effect.
 
It's got to be shell; included within that would be tools like grep, sed and awk, and all the rest. That's it, that's the unix userland, right there. Remember there are various different shells; if you want general compatibility with linux so that your scripts will run on both operating systems, you can use bash, for example. It's available on practically every unix system without having to install anything else.
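For instance, the classic "top talkers" one-liner needs nothing beyond sh and the standard tools (the log path is made up here, purely to show the shape of such pipelines):

Code:
#!/bin/sh
# Top 10 client addresses by request count (hypothetical log path).
awk '{print $1}' /var/log/httpd-access.log |
    sort | uniq -c | sort -rn | head -n 10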

Then if you want something a bit more powerful, Perl (meaning Perl 5), which wraps up all the things you can do in sh, grep, sed, awk etc, and more, into one language. But if you use perl you introduce a dependency on it, so the base level is shell+tools. That's it. You can do a huge amount of work without ever going down to programming in C.

A very nice classic book on unix shell and tools programming is "Unix Power Tools" by Tim O'Reilly et al.

I'm sure there are shell programming books specifically for FreeBSD out there too.
 
personally we think that any shell script longer than about 5 lines, or that uses builtins other than export and exec, is doing too much in shell and should be written in something else. lately we've given up on scripting, even, and just end up building larger programs in OCaml.
 
The first version is often not done very cleanly, as you're still finding your way about. Then it can serve as a rough reference for the second version: now that you are not focused on solving the immediate problem, you can do some more looking ahead.
I think this is true - if the rewrite is being done by the original team.
 
personally we think that any shell script longer than about 5 lines, or that uses builtins other than export and exec, is doing too much in shell and should be written in something else. lately we've given up on scripting, even, and just end up building larger programs in OCaml.
I was always too lazy to really learn and get 'proficiency' in sh; I know only the most basic things, if that.

A mistake, in my opinion, because it is omnipresent on unix systems; there are people who write real software with it.

The problem is that I have the feeling its behavior is strange and not coherent the way tcl's is.
 
personally we think that any shell script longer than about 5 lines, or that uses builtins other than export and exec, is doing too much in shell and should be written in something else. lately we've given up on scripting, even, and just end up building larger programs in OCaml.

Personally I would not draw such a hard line at 5 lines, but in principle I agree with you. I have also written sh scripts well over 2k lines myself, and recognized afterwards that those would have been better done in another [scripting] language.

Generally you are right, because in principle one should always choose the tool best suited to a given job.
But there are additional points to consider as well; tradeoffs need to be made:

➡️ Which tools/languages does somebody know?
Everybody can only work with the tools they have learned, and you can only judge and assign a given task to tools you know.
That's why everybody is well advised to always keep an eye open for new things. One does not need to learn everything. That's impossible anyway; it's way too much. But every computerist is best advised to at least glimpse at anything new and yet unknown, see whether it may fit the personal portfolio, and even if not, at least keep it in mind (bookmark it).

➡️ There is always a tradeoff between learning and doing actual productive work.
Learning and maintaining skills are very important parts of computing, and had better not be neglected. But doing actual productive work is the main plot. Suppose you are confronted with a new task which would best be done with B, but you would have to learn B first, while it can also be done with A, not quite as well, but you are already well versed in A. Then you need to answer for yourself: is it worth the effort to learn B, or is it better to solve the task in A anyway? Especially when you consider that you don't know whether this will be the only task ever solved in B, whether you are working in a team, and whether there is a time schedule. You can see this question becomes really complex and takes effort just to answer; in that time you may already have solved parts of the task in A, while not even having started on learning B.

➡️ Sometimes things simply grow.
I have some shell scripts that started as 5-to-10-liners. Then you add another line or two ("well, better respect this, too", "with 3 or 4 more lines it can do this, too...") and suddenly your script is >300 lines.
As long as the thing works satisfactorily, there is no real need to rewrite it from scratch in another language just because that language would be more suitable for the job. (A rewrite also produces new bugs.)

➡️ Evaluating tradeoffs is itself something that must be learned, just like learning to learn.
If you judge your tradeoffs badly, you do the 12th project still in A, annoyed that you didn't start on B sooner. Or you are annoyed at having spent all that time learning B, which you never used again; though that is no great loss if the time spent was not too much, since with everything new you learn you always gain something useful.
The true power of Unix does not come from the tools themselves, but from the combination of those tools: the more you know, the more power. But neither can everything be learned, nor can it be learned in no time. Again: a tradeoff.
Learning to learn also gives you a better estimate of how long it will take to learn something, and how deep it's worth digging in, if at all.

My advice for the OP was to first get a copy of each:
Kochan, Wood, Shell Programming in Unix, Linux and OS X, Addison-Wesley (quick'n'easy entry)
Robbins, Beebe, Classic Shell Scripting, O'Reilly (more comprehensive, way deeper)

You can get both for a few bucks at a used book store; they can be read within a weekend, and they bring a lot of useful and valuable insight and knowledge.
Anyway, take a glimpse at all the other things mentioned here:
awk, sed, perl, tcl/tk,...
and then the OP can see for himself which of those to dig into deeper, and in which order.
Especially awk and sed in combination with sh ramp up your shell powers significantly (see the small sketch below), while
Aho, Kernighan, Weinberger, The AWK Programming Language, Addison-Wesley, is quite a small book; not everything you want to learn requires a doorstopper.
But IMO sh would be the first choice: the primary task, the core basics.
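To give a flavor of what "awk and sed in combination with sh" means in practice, here is a small made-up sketch (the task and paths are invented; only the pipeline style matters):

Code:
#!/bin/sh
# Sum the sizes of one user's files under the current directory.
# find/ls provide the data, awk does the arithmetic, sed the cosmetics.
user="$1"
find . -type f -user "$user" -exec ls -l {} + |
    awk '{ total += $5 } END { printf "%.1f\n", total / 1048576 }' |
    sed 's/$/ MB/'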
 
Personally, it depends on how many systems I think I might want it to run on and whether or not it's permissible to install anything before using it. I've come to like Bourne shell scripting for basic things, like mounting my backup drives: I swap back and forth between a couple of them that are kept off site, and there isn't a particularly clean way to do that otherwise short of doing it manually. If you need a basic TUI, bsddialog looks to work a lot like Linux's dialog does, and can be quite handy if you don't want to memorize the information needed for a rarely used CLI utility.
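A minimal sketch of what such a backup-drive helper could look like; the device names and mount point are hypothetical, and bsddialog's --menu is assumed to behave like dialog(1)'s, printing the chosen tag on stderr:

Code:
#!/bin/sh
# Let the user pick which off-site drive is currently attached.
choice=$(bsddialog --title "Backup" \
    --menu "Which drive is plugged in?" 10 44 2 \
    da0 "First off-site drive" \
    da1 "Second off-site drive" \
    3>&1 1>&2 2>&3) || exit 1    # swap fds to capture the chosen tag
mount "/dev/${choice}p1" /mnt/backup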

For more complicated things, I tend to like Perl. It's personal preference, but it can be run with either dialog or Tk if I need a more sophisticated UI to go with it. I think the UI elements required only took a couple hours of working out, including the time I spent refreshing myself on Perl.

And regardless of what you're doing, learning how to use awk is quite helpful. There are plenty of times when awk, sed and grep chained together can get things in order for xargs. It's not always the most sophisticated approach, but it can be really helpful if you want, say, a complete inventory of all the software on your computer along with the appropriate ports directory for each package.
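For that inventory example, pkg can do most of the work itself, and the rest is exactly the kind of awk/xargs glue described above. A sketch: the %n/%v/%o format codes come from pkg-query(8); the missing-ports check is just an illustration:

Code:
#!/bin/sh
# Every installed package with its ports directory.
pkg query '%n-%v /usr/ports/%o'

# Flag origins whose ports directory no longer exists.
pkg query '%o' | sort -u |
    awk '{print "/usr/ports/" $0}' |
    xargs -I{} sh -c 'test -d "$1" || echo "missing: $1"' sh {}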

But, that's just me, I'm sure there's other ways to go about it.

EDIT: Keep in mind that if you're bothering to write a script, it's worth using the long options when they apply; the worst thing is coming back to the script later on and trying to remember what the options you used even mean. Just use the long options that exist and it cuts down a bit on the mental load that comes with maintaining a script later on.
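A small illustration of the difference, assuming a grep that provides GNU-style long options (FreeBSD's does):

Code:
# Six months from now, which of these still reads like English?
grep -ril pattern .
grep --recursive --ignore-case --files-with-matches pattern .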
 
Nim: it's like Python, but compiled.

Nim
  • Best for Scripting: Often cited as the most "productive" and pleasant for general tasks due to its Python-inspired syntax.
  • Why: It is highly versatile, featuring a garbage collector (by default) and a "batteries-included" standard library that handles high-level concepts like web, networking, and data processing much more easily than the others. It also compiles to JavaScript, making it viable for web-related scripting.
 
personally we think that any shell script longer than about 5 lines, or that uses builtins other than export and exec, is doing too much in shell and should be written in something else. lately we've given up on scripting, even, and just end up building larger programs in OCaml.
Of course such a rule would disallow almost all of the FreeBSD system scripts(!), just have a look under /etc/rc.d some time... :)
In reality, there are a lot of good reasons for using the shell for many tasks, and you can do some pretty sophisticated things in it. If you're working on any kind of unix system, it's always worth investing your time to learn the shell properly.
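For anyone curious what those scripts build on: rc.d scripts are plain sh plus the rc.subr(8) framework. A minimal skeleton (the service name and command here are made up) looks roughly like this:

Code:
#!/bin/sh
# PROVIDE: mydaemon
# REQUIRE: NETWORKING

. /etc/rc.subr

name="mydaemon"                      # hypothetical service
rcvar="mydaemon_enable"
command="/usr/local/sbin/mydaemon"

load_rc_config $name
: ${mydaemon_enable:="NO"}

run_rc_command "$1"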

Really the shell and the terminal emulator you're sitting at is what defines the unix user environment itself. All the other stuff, the graphical desktop, X11, wayland, gui apps, browser, etc, is all stuff that was added on top later. As you can see in this nice video from bell labs, still worth watching now; they explain many of the core principles of the operating system. Gotta love these cool dudes working with their feet up 😁
https://www.youtube.com/watch?v=tc4ROCJYbm0
 
Of course such a rule would disallow almost all of the FreeBSD system scripts(!), just have a look under /etc/rc.d some time... :) In reality, there are a lot of good reasons for using the shell for many tasks, and you can do some pretty sophisticated things in it. If you're working on any kind of unix system, it's always worth investing your time to learn the shell properly.

Really the shell and the terminal emulator you're sitting at is what defines the unix user environment itself. All the other stuff, the graphical desktop, X11, wayland, gui apps, browser, etc, is all stuff that was added on top later. As you can see in this nice video from bell labs, still worth watching now, it explains many of the basic principles of the operating system. Gotta love these cool dudes working with their feet up 😁
https://www.youtube.com/watch?v=tc4ROCJYbm0
Thanks so much for that - I'd been linked to it recently somewhere else but didn't have time to finish it and then lost track of it. That should be required viewing for all "unix-like" users.
 
POSIX shell only for simple stuff. Bash if you want lists and associative arrays.

But shell scripts spawn new processes, and writing portable shell scripts is hard. It's better to use a proper scripting language like Python or Perl.
Sure, that's why Perl became so popular in the first place; it wrapped up all the features of tools like sed, awk, grep, and regular expressions, together with data structures like lists and associative arrays, into one more-or-less coherent language. So once your shell script or set of scripts grows to a certain level of complexity, something like Perl provides a better way to do it. Yes, you can do associative arrays and regular-expression matches in bash, but the syntax is horrible (see the sketch below). But I think limiting your shell scripts to just 5 lines is to miss the real value that you can get from shell scripts alone, and any serious unix shop developing software would be unlikely to have a rule like that.
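For anyone who hasn't seen it, this is roughly what that bash syntax looks like (a small sketch; on FreeBSD, bash lives in /usr/local/bin after installing shells/bash):

Code:
#!/usr/local/bin/bash
# Associative array: service -> port.
declare -A port_of
port_of[ssh]=22
port_of[www]=80

for svc in "${!port_of[@]}"; do
    printf '%s -> %s\n' "$svc" "${port_of[$svc]}"
done

# Regex match with a capture group, via BASH_REMATCH.
line="listen 8080"
if [[ $line =~ ^listen[[:space:]]+([0-9]+)$ ]]; then
    echo "port is ${BASH_REMATCH[1]}"
fi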

I see Perl as a natural evolution of the unix philosophy as described by Brian Kernighan in that video... it started as a kind of super-awk, and in fact Larry Wall was working as a unix sysadmin when he wrote perl. Larry wrote other well-known tools as well, like patch.

Whereas python was designed as an academic teaching language to teach programming in univ. CS classes, and really comes from a different tradition (and continent). Personally I think Perl (perl 4 and 5, anyway) fits very nicely with the original design of unix (of course, it's not perfect, nothing is).

The main argument against using Perl (or python, etc) is that you introduce a dependency on Perl itself. You have to get Perl installed on your system and keep it current, or at some particular level, and pull down updates. Which may be inconvenient in many applications, like, say, a router or modem, where you just want to use what is available in the core operating system without having to track another external dependency.
 
Thanks so much for that - I'd been linked to it recently somewhere else but didn't have time to finish it and then lost track of it. That should be required viewing for all "unix-like" users.
Sometimes it's worth listening to the guys who were the original designers of the system; they had a lot of very good ideas, as you can tell from the fact that we are still using them decades later. At the time unix had many ground-breaking concepts that weren't available in other operating systems, and which were copied by more recent operating systems like windows. In fact windows still doesn't have a single unified hierarchical filesystem (it still has 'drive letters' like C:, D: etc. that date back to ms-dos and earlier).
 
Example of a script in Nim.

Code:
import std/[os, osproc, strutils]

let workDir = "build_assets"

# 1. Create the directory if it doesn't exist
if not dirExists(workDir):
  createDir(workDir)

# 2. Idiomatic way: Pass 'workingDir' directly to the command execution proc
# This is safer than changing the global state of your program
let (output, exitCode) = execCmdEx("df -h .", workingDir = workDir)

if exitCode == 0:
  echo "Disk usage in ", workDir, ":"
  echo output.strip()
else:
  echo "Failed to run command."

# 3. If you MUST change the directory for multiple operations, use 'discard'
# and do it on a separate line:
discard existsOrCreateDir(workDir) # A helper from std/os
setCurrentDir(workDir) 

# Now all subsequent relative paths refer to workDir
writeFile("log.txt", "Action performed.")
 
The main argument against using Perl (or python, etc) is that you introduce a dependency on Perl itself. You have to get Perl installed on your system and keep it current, or at some particular level, and pull down updates. Which may be inconvenient in many applications, like, say, a router or modem, where you just want to use what is available in the core operating system without having to track another external dependency.
My experience (others may disagree) with most embedded devices like routers or modems is that an upgrade image is all-inclusive: storage is roughly divided into boot, config, image A and image B. If image A is currently running, an upgrade goes into image B's storage. I know current things like DD-WRT tend to bend this a little, but look at upgrading a commercial Netgear device: go to the mfg website, it pulls down "the update" and installs it.
The all-inclusive aspect means that any updates to something like Perl will be part of that update.
 
Python wins hands down if you want something modern; it is simply one of the easiest languages to read and write. While it has many advanced features, you do not need to use them. Even if you come back months or years after you last wrote anything in Python, you will probably still understand what you did. Not so with Perl. With Python you have all the means to write simple scripts, and also to grow into major systems: testing/mocking, debugging, profiling, type hints etc.

The typical scripting languages like Python, Ruby, TCL and Perl have some problems in common:
  • Multithreaded code doesn't spread to multiple cores
  • No compile-time type checking
  • About 80 times slower than C, which together with being limited to a single core is a pretty steep cost
This is not necessarily true; quite a few parts of Python use C libraries in the background. E.g. using glob and iterating through a filesystem tree is nearly as fast as it gets in C, while needing far fewer lines of code.

Also note that Python has generally competent data structures, but it lumps arrays and lists into one and does some magic behind the scenes. That makes it hard to explicitly force the data structure you know you need.
There are libraries like msgspec and pydantic if you want strict enforcement, but then, one of the big advantages of Python is actually NOT strict enforcement but duck typing.

Furthermore, the vast ecosystem is amazing: from AI libraries to web libs (there are also Rust-based libraries that leave nodejs in the dust).
 