Useless Scripts/Programs

One from me. This is a particularly pointless wrapper script to fill a file (or files) with random data of the same size as the original(s). You might just as well have called dd directly as run this script. And why would you want to overwrite a file (or even an exceedingly humongous number of files) with random data anyway? It does all seem rather pointless and altogether a bit silly, or even a little frivolous.

Perl:
#!/usr/bin/perl
#
use warnings;
use strict;
 
use feature ':5.10';
 
sub blat {
    my $fname = shift;
    say $fname;
    my $fsize = -s $fname;
 
    `dd if=/dev/urandom of="$fname" bs=$fsize count=1 conv=notrunc >/dev/null 2>&1`;
}
 
if (!@ARGV) {
    say 'blat.pl <list of files to blat>';
} else {
    foreach (@ARGV) {
        blat $_;
    }
}

I suppose it might even verge on being 'useful', under certain circumstances. Of course, you could always re-write it in 'brainfuck', just to be sure.
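For comparison, the direct dd call the script wraps looks something like this (a sketch; the demo file here is a made-up stand-in for a real target):

```shell
# A sketch of the direct dd call: overwrite a file in place with random
# bytes of the same length, without truncating it.
f=demo-file.bin
printf 'not very secret data' > "$f"   # stand-in for the real file
size=$(wc -c < "$f")                   # original size in bytes
dd if=/dev/urandom of="$f" bs="$size" count=1 conv=notrunc >/dev/null 2>&1
```

The size stays the same afterwards; only the contents change.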
 
When I came to Unix my programming ability was ahead of my ability to appreciate the existing tools and libraries.

I wrote many a naive replacement for existing stuff, usually crippled versions.
i wish it had been the same for me, i would've learned a lot that way

I always liked this page from 'muppetlabs'. Not my own work, but well worth a look: how to make the tiniest possible ELF executables. I'm sure they can be ported to freebsd...
ohh i remember that one, i remember being obsessed with trying to make super small executables back when i was learning x86 assembly

This is the most useless code ever:
Code:
:rolleyes:
best part is how you don't even need to change anything to port it to literally any programming language in existence!

why would you want to overwrite a file with random data anyway?
it's actually great for making viruses, if you have access to /dev/mem you could really accomplish some fun stuff with that, or heck even overwrite the entire kernel image plus some critical binaries like init to completely brick one's system

I suppose it might even be useful, under certain circumstances...
so yeah you're right about that
 
Here is the shortest almost-useless program.

Code:
IEFBR14    CSECT
           XR        15,15            ZERO RETURN CODE
           BR        14               RETURN

What does it do? Let's translate this to C:

C:
int
main()
{
    return (0);
}

Pretty useless, eh?

In the UNIX world, it is useless. In the mainframe world, when using JCL, it's a null program that does nothing, so that one can use JCL to dispose of datasets (files in UNIX parlance). IEFBR14 is a real thing.
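For anyone who hasn't met JCL, the classic use looks something like this (job, step, and dataset names are made up for illustration):

```
//CLEANUP  JOB  (ACCT),'DELETE OLD DATASET'
//STEP1    EXEC PGM=IEFBR14
//DELDD    DD   DSN=MY.OLD.DATASET,DISP=(MOD,DELETE,DELETE)
```

The program does nothing at all; it's the DD statement's disposition that deletes the dataset. JCL just needs *some* program to run so the allocation and disposition processing happens.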
One of the funny things about IEFBR14 is that early versions had a bug. The early version was a single line of assembly code, a single instruction, and yet it managed to be broken. The bug was that it didn't zero the return code, so once JCL learned to do conditional execution of other job steps (sort of like an if statement in a Unix shell), IEFBR14's behavior became unpredictable. So they had to add the second instruction to it, shown above.

Is there any programming language in which a zero-line program is useful?
 
One from me. This is a particularly pointless wrapper script to fill a file with random data of the same size. You might just as well have called dd directly as run this script. And why would you want to overwrite a file with random data anyway? It does all seem rather pointless and a bit silly.

Although... it would be kind of satisfying to say something like:-
$ blat secret-email-to-prime-minister.doc
there's a utility common on linux systems that does this, shred, but the utility of such a tool only exists in spinning-disk filesystems that naïvely map files to blocks on disk; it would be useless on a copy-on-write system like ZFS or with an SSD wherein the controller does wear leveling.

also we have to nitpick the perl style, using backticks in a void context is unnecessary, that should be system().
 
there's a utility common on linux systems that does this, shred, but the utility of such a tool only exists in spinning-disk filesystems that naïvely map files to blocks on disk; it would be useless on a copy-on-write system like ZFS or with an SSD wherein the controller does wear leveling.

also we have to nitpick the perl style, using backticks in a void context is unnecessary, that should be system().
As a matter of style, it's kind of needlessly verbose, too. With a little more effort it could be turned into a nice, compact 1-liner.
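A sketch of what that compact version might look like, run straight from the shell (the demo file here is a made-up stand-in); using system's list form also sidesteps the shell-quoting problem the backticks had:

```shell
# Hypothetical compact rewrite of blat.pl as a command-line one-liner.
# system's list form invokes dd directly, so odd file names are safe.
printf 'twenty bytes of data' > demo.bin   # stand-in target file
perl -we 'system "dd", "if=/dev/urandom", "of=$_", "bs=" . (-s $_), "count=1", "conv=notrunc" for @ARGV' -- demo.bin 2>/dev/null
```

Same effect as the full script: each named file is overwritten in place with random bytes of its original length.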
 
One of the funny things about IEFBR14 is that early versions had a bug. The early version was a single line of assembly code, a single instruction, and yet it managed to be broken. The bug was that it didn't zero the return code, so once JCL learned to do conditional execution of other job steps (sort of like an if statement in a Unix shell), IEFBR14's behavior became unpredictable. So they had to add the second instruction to it, shown above.

Is there any programming language in which a zero-line program is useful?
Do you count a NOP as a line of code?
 
When you finish a refactor of the entire codebase, because you understood the project needed an even better algorithm-driven or data-driven structure, suddenly big blocks of complex code become useless.
 
maybe it wouldn't matter for modern CPUs that run through those instructions lightning-fast, but for older ones it surely would; either way the CPU is executing code, even if it does nothing whatsoever
 
On pre-superscalar CPUs, the exact number of clocks a NOP will consume is known and fixed, hence for a given clock rate the exact execution time is known. What happens on superscalar CPUs, with out-of-order execution and multiple parallel internal execution units, probably depends on the particular architecture. You would use a timer to generate delays on modern architectures.
 
On pre-superscalar CPUs, the exact number of clocks a NOP will consume is known and fixed, hence for a given clock rate the exact execution time is known. What happens on superscalar CPUs, with out-of-order execution and multiple parallel internal execution units, probably depends on the particular architecture. You would use a timer to generate delays on modern architectures.
on modern CPUs a lot of instructions don't even execute-as-such: nops get decoded and evaporate from the pipeline, register clears cause a new physical register to be allocated, etc.
 
Yeah, instruction re-ordering or "out of order execution" is another thing they do, there are all kinds of tricks. Unfortunately it makes it harder to write the kind of deterministic assembler we used to write on pre-superscalar processors, if that is even possible on mainstream cpus now. You have to trust the compiler. Of course you can still write in assembler and it will execute, but it may not run in realtime in the exact order that you think it's going to run (or that you wrote in your program).
 
It causes a lot of problems too, perhaps the most famous being the spectre and meltdown bugs that hit intel systems maybe 10 years ago, which exploited another feature called speculative execution. They do a lot of optimisation and on-chip parallelism (eg, 'hyperthreading') to get higher performance, but that creates opportunities to hack the chips in ways that didn't exist in the past. So if you look at freebsd or linux there are a whole series of mitigations for security exposures in modern cpus.

There was a thread recently discussing spectre/meltdown mitigation in freebsd.
 

Generally today you will write in C (or a higher-level language that is itself written in C) and let the compiler toolchain take care of all the details. You would only drop down to asm to do specific bits of work, say if you want to exploit some opcodes in the cpu that the compiler doesn't support; the classic example used to be the Streaming SIMD Extensions (SSE), although perhaps compilers have got round to supporting those now, I haven't checked on that for a while. Or another thing you might do a little bit of inline asm for would be to read the on-chip timer register (rdtsc), for example, or if you wanted to write yourself a fast spinlock.
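Reading the time-stamp counter is a nice small example of that kind of inline asm. A sketch in gcc/clang syntax, x86 only (the wrapper name is my own):

```c
#include <stdint.h>

/* Read the x86 time-stamp counter via inline asm (gcc/clang syntax).
 * RDTSC leaves the low 32 bits in EAX and the high 32 bits in EDX. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;

    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```

Note that on out-of-order CPUs the read itself can be reordered around nearby instructions, which is exactly the kind of non-determinism discussed above; serious measurement code pairs it with a serialising instruction.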
 