Let me explain programming to you.
Lots of other people have tried.
Nothing should cause a panic unless it's unrecoverable.
Let me answer that with an old joke. A software test engineer walks into a bar. He orders one beer, drinks it, all good. He orders 99999999 beers. He orders -1 beers. He orders sqrt(2) beers. He orders -e^(i*pi) beers. He orders a "qwertyuiop". He orders an elephant. He doesn't get any beers, and leaves satisfied.
In the real world of software engineering, functions have an API. The purpose of the function is to serve the API, if possible. If it is not possible, then it gets complicated. The API is like a contract, and violating the contract is not funny. Not doing anything and pretending that the request worked is definitely wrong, because that means that broken software continues to exist. Returning an error indicator (like returning -1 and setting errno) may be appropriate. Throwing an exception may be appropriate (assuming we're in a language where exceptions can be raised and caught). Asserting or blowing up the program may be appropriate. Let's work through the example above, and let's assume that this bar only has one kind of beer, and one size of beer glass.
- Ordering 1 beer is certainly valid.
- Ordering 99999999 beers might be valid (if the software tester has many thirsty friends), but there is probably not enough beer in the pub to serve the request. So a good answer might be to set an error indicator (return -1), and set errno to ENOBEER = "This system is now out of beer".
- Ordering -1 beers indicates that the caller has a software error, because that request makes no sense. Personally, I would assert, although raising an exception might also be reasonable; it depends on the programming culture of the language and installation in question.
- For the remaining questions, we need to assume that the software uses dynamic typing, and the language allows ordering beer with arguments that are not integers (think Python as an example). I don't know what the answer to ordering sqrt(2) beers should be; it again depends on the programming culture. A possible answer might be: serve the nearest integer number of beers, in this case 1. This has disadvantages: if the caller relies on either getting an error indicator or the correct number of beers, that assumption is now violated (he asked for ~1.414 beers and got only 1). It may also have advantages: the caller gets some beer, and if he's still thirsty after drinking it, he can just order more; if he really wishes to drink 1.414 beers, he can order a second one and leave a roughly half-full glass behind when paying. Another possible answer might be: customers in a bar need to understand that beer is only served in integer numbers of glasses, and if they ask for a non-integral floating-point number, we need to assert (throw them out of the bar). It really depends on the environment, the programming culture, the coding rules, and the tightness of the contract that's implied in the API.
- Ordering -e^(i*pi) beers is even more interesting. And here the answer really depends. If the language has a good floating-point and complex number library, maybe it should work, and the tester will get exactly one beer. Meaning: If the requested amount is exactly an integer (or a floating point number that happens to be exactly integral, or a string that can be parsed as an integer), we can serve the request. This is the coding style in many scripting languages and rapid development frameworks. Another perfectly sensible implementation is to say: you can only get integer beers, not floating point or complex, and therefore assert. This would be typical in C-like languages, which tend to be very strict (except that this won't even compile in normal C or C++). Again, the answer depends.
- Ordering a string like "qwertyuiop" (in particular one that doesn't parse as an integer) just makes no sense. The caller needs to get an assert. It's too bad this even compiled, but in interpreted, dynamically typed languages this can't be prevented.
- Ordering an object of type Elefant in a place that only serves beer is even more broken than ordering a string. Definitely assert, and perhaps call the programmer's manager to request that the programmer be sent to a psychiatrist, since he clearly has mental problems.
Notice: I never addressed what should happen if you order zero beers. It really depends on the implementation and the style. One possible answer is what the read() system call does: If you request 0 bytes, it does exactly nothing. This is sort of sensible, but also sort of problematic: Usually, a return value of 0 to read() means that the input file has reached EOF. After calling read(..., 0), do you know whether you've reached EOF? If I were asked to review some code from a colleague, and I found a case where they try to read 0 bytes, I would demand that they either take it out, or put a comment in there explaining exactly how they intend to handle the EOF case, because this is a bug waiting to happen.
A different example is the malloc() call. If you request zero bytes, what happens? The answer: it depends on the implementation. Some implementations will honor the request and return a valid pointer, but that pointer is useless: you are not allowed to dereference it (because there is no accessible memory behind it). This is definitely a bug waiting to happen. Even worse, you eventually have to call free() on the returned pointer, otherwise you will leak memory (because malloc() hands out not just the memory you requested, but also control structures). Other implementations will instead return a NULL pointer, which you clearly cannot dereference. You can free() it, but you don't have to (see below). But here the same problem that plagues read() rears its ugly head: in theory, getting a NULL pointer back from malloc() means that your program just ran out of memory, and errno is set to ENOMEM. If you ask for zero bytes and get back NULL, what exactly happens to errno? I have no idea; I don't remember the C standard saying either way. So this is a massive bug waiting to happen. If someone gave me code that tries to malloc zero bytes, I would first reject it in code review, and then I would walk over to their manager and politely request that this programmer get some form of attitude adjustment (perhaps a coding class).
And now a different example: The free() call. It is perfectly valid and harmless to give a NULL pointer to free. This is a very sensible decision, because it allows for a programming style with fewer if statements:
Code:
void do_daily_feeding(...)
{
    Elefant* pe = NULL;
    if (... this zoo has an elefant ...) {
        pe = (Elefant*) malloc(sizeof(Elefant));
        if (!pe) {
            ... do something to handle being out of memory ...
        }
        ... feed the elefant, bring it to the elefant bathroom, whatever ...
    }
    free(pe);
}
The nice thing is: even if the if statement gets complicated, and even if the function uses goto, as long as you get to the end, any allocated memory will be freed. So there are cases where having functions that do nothing when called with a zero or NULL argument makes perfect sense. There are cases where it is extremely bad. It depends.
Now, for the particular case of contigmalloc(), I don't know the correct answer. You could have a polite and informed discussion with the kernel developers and propose changing it. You might even succeed in convincing them. Personally, my hunch is that you won't, and here's why: the programming style of the kernel has to be much tighter, and handling errors has to be pretty much guaranteed. Bad code needs to be weeded out. It also has to be very, very efficient. Asking for zero bytes makes no sense, and it uses resources, so don't do it; if the caller doesn't know ahead of time whether any bytes are needed, they can have an if statement:
Code:
void* px = NULL;
if (... we need any bytes ...) {
    px = contigmalloc(size, ...);
    ... do something with the memory ...
    contigfree(px, ...);
}
Furthermore, as I said above: asking for zero bytes may indicate a software bug (or it may not). In the kernel, "may be a software bug" is not acceptable.
Quote: "Does malloc or any other memory allocation function crash and burn if there's no memory? No, it returns NULL."
Now you are confusing things. If there is no memory, malloc returns NULL. When you ask for zero bytes, malloc ... see above. Those two situations have little to do with each other.
And crash and burn is perfectly acceptable in some environments. One example is the new operator in C++. In theory, it is allowed to throw an exception when it is out of memory. But in normal usage (without nothrow arguments and without setting handlers), it is also allowed to do something else. It turns out that in the standard Red Hat Linux environment, new will never throw an exception; instead, if you exhaust memory completely, the program will be aborted. This is considered acceptable, even good, by many people: partly because on modern servers and desktop machines it is very unlikely that you run out of memory, partly because catching exceptions absolutely everywhere is impractically difficult, and partly because there is just about nothing the program can do when it is this short of memory. In places where memory actually can be exhausted, people don't use the new operator; instead they write specialized code that ties into the run-time library and the memory-management system calls to handle things differently.
Here is by the way how the software test engineer joke ends: The next day, a customer walks into the bar, and asks where the restroom is. The bar explodes, killing everyone in sight.