C man strncasecmp

The strcasecmp() and strncasecmp() functions compare the null-terminated
strings s1 and s2.

The strncasecmp() function compares at most len characters. The
strcasecmp_l() and strncasecmp_l() functions do the same as their non-
locale versions above, but take an explicit locale rather than using the
current locale.

I cannot imagine that for strncasecmp the strings must be '\0' terminated, but the man page is not precise.
 
In C all strings must be \0 terminated.

See man pages for bcmp() or memcmp(): the arguments are called byte strings, but they need not be '\0' terminated. Unfortunately the comparison they do is case sensitive.

I also think C does not prescribe how strings are represented; that is an issue for the libraries.
 
I cannot imagine that for strncasecmp the strings must be '\0' terminated, but the man page is not precise.
If either string is shorter than len, the nul prevents comparing any junk beyond the end of the string. Requiring the nul allows len to be any value. If you know your text is longer, you can get away with omitting the nul but you may introduce a bug later if and when any of the parameters change.

I know some people hate to waste a whole byte when storing fixed length strings in a C struct.
 
I think the man page is perfectly precise. Let's ignore the ..._l functions and locales.

strcasecmp() compares only null-terminated strings. It will run through memory until it finds a zero. That's what the first paragraph in your quote says.

strncasecmp() will stop reading the string when it finds a zero, just like all str... functions. Again, that's what the first paragraph says. But it may also stop earlier, when reaching len characters. That's what the second paragraph says.

Now, does that mean that you have to null-terminate your strings? If you use ONLY strn... functions, and you carefully adjust all your strings to be long enough, or equivalently carefully adjust the len arguments to all the functions to be the real length of the strings, then you really don't need to put the zero characters into memory. I think such a design might work. I also think it would be insane, crazy, stupid, criminal. Or perhaps just not really idiomatic C, and not really that bad.

But honestly, if someone wants to not waste the space for the zero, and also make string processing more efficient, they should stop using the C-style string libraries and use something better. There are oodles of libraries that make strings first-class citizens (not just an array of bytes or characters terminated with an in-band terminator, which can occur at any byte offset and makes processing on modern hardware inefficient). My favorite way of doing it is with string objects that are immutable and store their length explicitly in an integer counter in the object. But to each their own ...
 
Tracking string length can be a burden. Some folks think (s)printf requires a nul. It does not.

const char text[] = "eighteencharacters or more";

printf("%*.*s\n", 18, 18, text);
 
Absolutely. If you are mostly doing C-style strings, and one function in the middle needs counting, then things like strn... are a good way to handle the rare exception. If you are doing mass production of handling strings (like implementing a SQL database with query compiler), then C-strings are a tough way to go, and you're better off using a dedicated library that is easier, safer, and more efficient.
 
Well, what does the real reference say? SUSv4 / X/Open / POSIX speaks of strings and possibly null-terminated arrays. It also defines a string as a contiguous sequence of bytes terminated by a null byte.

So clearly null-termination is optional for the 'n' variants. I think that the FreeBSD man page should also say something to that effect.
 
This quickly goes into the area of defensive coding. Example: always using the "n" string functions with a sane upper bound to prevent buffer overflows. That's beyond the scope of a single man page.
 
This quickly goes into the area of defensive coding. Example: always using the "n" string functions with a sane upper bound to prevent buffer overflows. That's beyond the scope of a single man page.

Yes, but beware of the law of unintended consequences. By using 'n' variants you are removing problems related to buffer overflows. In exchange you get problems with handling the truncated result. The strncat man page goes into details of this.

If you are writing multi-platform code, then the 'n' variants can also be a problem, for instance with snprintf.
 
the 'n' variants can also be a problem, for instance with snprintf.
The printf example I gave above covers this. You prevent the output from growing too large by controlling the inputs. If you give me an snprintf that truncates, I can show you an sprintf that *safely* truncates because I can control where the truncation(s) occur (and there's nothing wrong with combining the two).

Not sure if any of this is still useful to the OP.
 
Not sure if any of this is still useful to the OP.

Thanks. I solved the problem as I wrote above: one line inside the source, rather than writing a separate function containing only that line, which would only make sense if I needed the same line many times. I miss in C something like call/ret, lightweight subroutines inside function definitions for trivial things like this. There is also no indirect goto.

About "printf("%*.*s\n", 18, 18, text);", I read in the printf man page the following description of the s conversion specifier:

The char * argument is expected to be a pointer to an array
of character type (pointer to a string). Characters from the
array are written up to (but not including) a terminating NUL
character; if a precision is specified, no more than the
number specified are written. If a precision is given, no
null character need be present; if the precision is not
specified, or is greater than the size of the array, the
array must contain a terminating NUL character.
 