It is much more complicated, and quite off-topic. In a Unix file system, you have to distinguish two things: files, and the names that point at files. Files can have zero or more names. Names point at exactly one file (except during short transition periods, when they point at zero or two; during those periods nobody is allowed to look at them, and locking guarantees that).
In the following, I will use the term "inode" to talk about files. That's not 100% correct, because there is more to a file than just the inode (which does not usually contain the data of the file), some file systems don't actually use inodes at all, and sometimes inodes themselves are stored as records inside files, but for simplicity I'll use the word inode. The inode contains all the attributes of a file except the name, for example ownership and permissions, mtime and atime and all that, and most importantly some way to find the data of the file on disk. It does *not* include the file name. Inodes are stored on disk. In the old days, inodes were exactly 512 bytes long (one sector), and were simply stored on disk, using the same space allocation method that files used when they needed a sector.
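If you want to see what actually lives in the inode, the stat() system call hands you pretty much exactly those attributes and nothing else; note in particular that there is no name anywhere in what it returns. A minimal POSIX sketch, nothing file-system specific:

    /* Print a few of the attributes that live in the inode.  Note that the
     * structure the kernel hands back has no "name" field at all. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <path>\n", argv[0]); return 1; }

        struct stat st;
        if (stat(argv[1], &st) != 0) { perror("stat"); return 1; }

        printf("inode:  %lu\n", (unsigned long)st.st_ino);
        printf("owner:  uid %u\n", (unsigned)st.st_uid);
        printf("mode:   %o\n", (unsigned)st.st_mode & 07777);
        printf("mtime:  %s", ctime(&st.st_mtime));
        printf("names:  %lu\n", (unsigned long)st.st_nlink);
        return 0;
    }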
File names, on the other hand, are stored in directories. You can think of directories as objects in the file system that are just like files, except their content is specially structured: they always consist of directory entries. For simplicity, we can assume that a directory entry contains a name (relative to the directory it is in, so /home/fred/hello.c would have an entry named "hello.c"), a flag indicating whether this entry is a file or a directory (or some other strange entity like a soft link or a device or other things we'll ignore), and, if it is a file, a way to find the inode. Most file systems use an inode number for that (not all do). You can really think of directories as nothing but special files containing lists of directory entries (early file systems implemented them exactly that way).
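You can actually watch that "list of (name, inode number) pairs" structure from user space with readdir(); here is a small sketch (the d_ino field is how POSIX exposes the "way to find the inode", on file systems that have inode numbers at all):

    /* List (inode number, name) pairs in a directory. */
    #include <dirent.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        DIR *d = opendir(path);
        if (!d) { perror("opendir"); return 1; }

        struct dirent *e;
        while ((e = readdir(d)) != NULL)
            printf("%10lu  %s\n", (unsigned long)e->d_ino, e->d_name);

        closedir(d);
        return 0;
    }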
Let's go through the life cycle of files to get a few examples. You open a file that already exists by name: You look up the name in the appropriate directory, find the inode number, look at the inode to do some permission checking, find the data, and read or write it. Boring. You create a new file that doesn't exist: you add a new directory entry to the appropriate directory, you create an empty inode and write it to disk, and then start writing (or reading after you have written something). So far it's easy. Notice that I have deliberately not talked about deleting a file!
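In code, both cases go through open(); the file names below are just made up for the example:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open an existing file by name: the kernel walks the directory,
         * finds the inode, checks permissions, and hands back a descriptor. */
        int fd = open("hello.c", O_RDONLY);
        if (fd >= 0)
            close(fd);

        /* Create a new file: a new directory entry plus a fresh, empty inode. */
        int nfd = open("newfile.tmp", O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (nfd < 0) { perror("open"); return 1; }
        write(nfd, "hi\n", 3);
        close(nfd);
        return 0;
    }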
The complexity happens because of two nasty things in unix file systems: hard links and temporary files. Let's do hard links first. A file already exists, and has exactly one name (the one you created above). You want to add a second hard link to it: You find the appropriate directory, add a new directory entry to it (with the correct name), and make it point to the same inode. So now you have two names (two directory entries) that point to the same inode (the same file)! That fact has to be recorded somewhere (you'll see in a moment why). That may sound confusing (and I personally hate it, life would be easier if there was a 1-to-1 correspondence between inodes and names), but it works. The important thing to remember: an inode (the body of the file) can have multiple names that point to it.
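The system call that creates the second name is link(), and the "recorded somewhere" part is visible from user space as the link count (st_nlink). A sketch, with both file names invented for illustration:

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Assume "original.txt" already exists and has exactly one name. */
        if (link("original.txt", "second-name.txt") != 0) { perror("link"); return 1; }

        /* Both names now resolve to the same inode; the kernel records the
         * number of names in the inode's link count. */
        struct stat st;
        if (stat("original.txt", &st) != 0) { perror("stat"); return 1; }
        printf("inode %lu now has %lu names\n",
               (unsigned long)st.st_ino, (unsigned long)st.st_nlink);
        return 0;
    }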
Let's do the first example of deleting a file. Let's take a file that has many names (many hard links): We can delete one of the names (with the unlink system call), and exactly nothing happens. Superficially, all unlink() does is to remove a directory entry that points to a file. This sounds like "delete" is broken on Unix file systems, and they will leak disk space, but we'll fix that later.
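Continuing the sketch above: removing one of the two names just decrements that counter, and the data stays perfectly reachable through the other name.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        unlink("second-name.txt");      /* removes one directory entry... */

        struct stat st;
        if (stat("original.txt", &st) != 0) { perror("stat"); return 1; }
        printf("link count is now %lu\n", (unsigned long)st.st_nlink);
        /* ...but the inode and all the data blocks are still there. */
        return 0;
    }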
The crazy thing is creating a temporary file. That's done like this: You create a new file (new directory entry, new inode), and record all that. That new file now has one name. Then, while the file is still open, you delete it (with the unlink system call). The file is still open, so the inode can't go away, but the directory entry is wiped out. At this point an inode exists, but there are zero names pointing to it. You can read and write the file, but nobody else will ever be able to find this file again. Now, what prevents that inode from being harvested, overwritten, or destroyed? The fact that the file system remembers that one user process has it open. What do we learn from this? The file system has to keep track of how many names a file has, and whether it is open.
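Here is that dance as a minimal sketch (the file name is made up; on Linux, O_TMPFILE can nowadays do the create-plus-unlink in one step, but the classic pattern looks like this):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create the file: new directory entry, new inode. */
        int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* Delete the only name while the file is still open: the inode now
         * has zero names, but it survives because one process holds it open. */
        unlink("scratch.tmp");

        /* The descriptor still works; nobody else can reach this file by name. */
        write(fd, "invisible\n", 10);
        lseek(fd, 0, SEEK_SET);
        char buf[16] = {0};
        read(fd, buf, sizeof buf - 1);

        close(fd);   /* only now can the kernel actually reclaim inode and data */
        return 0;
    }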
Now let's go completely crazy, and build the super-secret communication channel that nobody can find. Adam and Bob are two processes on a Unix system, and they want to communicate without Eve listening in (don't worry, the thing I'm about to build is actually totally insecure, it's intellectually no more complex than two cans and a string). Here's what they do: Adam creates a file, and then Bob opens it. Now the same file is open in two processes, and each can see what the other has written, simply by reading. Once they know that communication is working, one of them unlinks the file. Nobody else can find the file (without a name, you can't get to it), so Eve can't spy on them. The two of them chat for a while, then both close the file. (By the way, Bob and Eve are the standard names used in the cryptography literature, where my Adam would usually be called Alice; and since Eve has a brain (she's female after all), she could just have opened the file at the same time Bob did, and read everything the two idiots were talking about.)
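If you want to play with it, here is the whole story compressed into one program, with fork() standing in for the two separate processes and an invented file name; as promised, there is nothing secret about it:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Adam creates the file and writes his message. */
        int adam = open("channel.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (adam < 0) { perror("open"); return 1; }
        write(adam, "meet at noon\n", 13);

        if (fork() == 0) {
            /* Bob opens the same file by name while it still has one... */
            int bob = open("channel.tmp", O_RDWR);
            if (bob < 0) { perror("bob open"); return 1; }

            /* ...then removes the only name.  Both open descriptors keep
             * the inode alive, but nobody new can find the file anymore. */
            unlink("channel.tmp");

            char buf[32] = {0};
            pread(bob, buf, sizeof buf - 1, 0);
            printf("Bob read: %s", buf);
            close(bob);
            return 0;
        }

        wait(NULL);    /* Adam waits for Bob to finish, then closes his end. */
        close(adam);   /* both counters are now zero, the inode can be freed */
        return 0;
    }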
So let's ask the question: those temporary files, when can they actually be deleted? Here is the answer to how deleting a file really works: unlink'ing a file only removes a name of a file, as we said above. All inodes have to keep two counters: a counter of how many directory entries point at this inode (usually 1, but as we saw above it can be 0 for a temporary file, or 2 or more for a file with multiple hard links), and a second counter of how many processes have this file open right now (this is zero for most files; once a file is open, there is a data structure in kernel memory that describes the open state). Whenever a file is closed by a process, the kernel can check: if both counters (both the number of names and the number of open instances) have dropped to zero, it can actually wipe out the inode (in practice, inodes are typically not wiped out, just unallocated, and the space becomes available for reuse).
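The name counter is visible from user space as st_nlink; the open counter lives only in kernel memory, but you can see its effect, because fstat() on an unlinked-but-still-open file happily reports zero names while the file keeps working. A sketch (file name invented again):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("counter-demo.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (fd < 0) { perror("open"); return 1; }
        unlink("counter-demo.tmp");

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
        printf("names pointing at this inode: %lu\n", (unsigned long)st.st_nlink);

        close(fd);   /* open count drops to zero -> inode and blocks are freed */
        return 0;
    }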
So finally, I can explain how a normal rm works: Superficially, it just unlink's a directory entry; but in the normal case (of a file that has a single name and is not open), that happens to also wipe out the inode, and in the process free all the data blocks on disk that are in use by the file. It is actually quite a complicated process. The important thing to remember is that the second part of that operation (wiping out the inode and freeing the data blocks) may not happen at the time of rm and unlink, but later. By the way, I don't want to discuss right now what happens if the computer crashes in the middle of any of these operations, but if you think about it just a little bit, you'll see why fsck exists.
You talked about shred'ing files and overwriting them three times with random bits. Some file systems can do that when the inode is wiped and the data blocks are freed. But the reality is much more complicated. For example, one can truncate a file to zero size, and some file systems even have the ability to punch holes (unallocated areas) into files. Should the blocks freed that way be shred'ed or wiped as well? Interesting policy question.
(By the way, if you enjoy complex puzzles: Go through the above logic for when the open count and the name count get incremented and decremented, but now with snapshots, and files that can be read from snapshots, and snapshots that can be deleted. There is a reason that snapshots were added to file systems only after the field had about 20 years of experience.)
In most cases, immediately after an inode is wiped (meaning, usually, just unallocated), the inode is still on disk without having been overwritten, and the data blocks are probably also still there. And the directory entry (in the directory itself) is typically also not completely removed, just flagged as "free, can be used for something else". So if you crash the machine immediately afterwards, and use an offline data recovery tool, you can probably find the inode and data, and perhaps even most of a directory entry for it. But at this point there are no consistency guarantees: the file system is heavily multi-threaded, and some other thread could already have started using the same disk blocks for something else.

As a matter of fact, modern file systems do not actually implement all that stuff I explained above literally, because doing all these steps with real IO would be ludicrously slow. Instead, most of these operations happen in memory, with complex data structures and locking. With journaling, transactions, and soft updates, the actual writing of these things to disk happens before, after, and instead of the simple outline I gave above. So now some tools come along (either offline tools, or even worse, online tools like the ones lefsha described), and they think they can look at the disk and modify the state of file system internal structures, or write to disk, and most of the time chaos will happen (it may actually work some of the time). If undelete were easy, file systems would implement it. It is hard, and a lot of work. And the little tools that claim to do the work are usually unsafe, because they are not correctly integrated.