Compiler option -Os optimisation level

kavitakr

New Member

Reaction score: 1
Messages: 19

Hi,
We used to compile some C code on FreeBSD 10.4 at the -Os optimisation level with the clang version below.
FreeBSD clang version 3.4.1

We have now migrated to FreeBSD 11.2. When we use the same option, compilation takes almost 1.5 hours longer.
FreeBSD clang version 6.0.0 (tags/RELEASE_600/final 326565)

Any pointers on what can be checked?
 

SirDice

Administrator
Staff member
Administrator
Moderator

Reaction score: 10,592
Messages: 36,224

Keep in mind that 11.2 is also EoL. I would suggest upgrading to 12.2 as the entire 11 branch will be EoL soon (probably some time after the release of 13.0).
 
D

Deleted member 66267

Guest


Hi,
We used to compile some C code on FreeBSD 10.4 at the -Os optimisation level with the clang version below.
FreeBSD clang version 3.4.1

We have now migrated to FreeBSD 11.2. When we use the same option, compilation takes almost 1.5 hours longer.
FreeBSD clang version 6.0.0 (tags/RELEASE_600/final 326565)

Any pointers on what can be checked?
Could you run freebsd-update to get the latest 11.4-p5 version?

IMHO, the more modern the compiler, the longer it takes to compile things and the more bloated the binary gets.

But 1.5 hours of extra time is too much. Could you try compiling without -Os to see if it takes the same amount of time?

BTW, I don't like -Os. I found that -O2 followed by strip --strip-all gives me a smaller binary, but that's just my own opinion; I'm no expert in this field.
 

debguy

Active Member

Reaction score: 23
Messages: 225

I always use -O0. On Linux (I am new to FreeBSD) I am 100% certain gcc emits incorrect ASM code that causes good applications to fail, and the main cause is -O2; however, gcc can mess up even at -O0.

When you read about optimizing code you see phrases like "this shouldn't fail because" and "this is a good guess in almost every case because", and that's why it doesn't work. "Works 100% of the time or don't do it" is not a language optimizers have learned yet. And clang, well, I'm having to edit code that was good for decades on most C compilers because someone decided it was a diminished language.

Whatever, don't listen to me.
 

mark_j

Aspiring Daemon

Reaction score: 484
Messages: 883

Hi,
We used to compile some C code on FreeBSD 10.4 at the -Os optimisation level with the clang version below.
FreeBSD clang version 3.4.1

We have now migrated to FreeBSD 11.2. When we use the same option, compilation takes almost 1.5 hours longer.
FreeBSD clang version 6.0.0 (tags/RELEASE_600/final 326565)

Any pointers on what can be checked?
You're being too vague, I'm afraid. :confused:
Maybe post information on your system's memory and CPU count?
At a wild guess, it seems like there's swapping going on, causing a huge delay.
What's the particular issue? Is it with one C translation unit?
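On FreeBSD, that information can be gathered with something like the following (run the snapshot while the build is going to spot paging):

```shell
sysctl hw.ncpu hw.physmem    # CPU count and physical memory
swapinfo -h                  # swap devices and current usage
top -b                       # one batch-mode snapshot; run it during the build
```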
 

ralphbsz

Son of Beastie

Reaction score: 1,975
Messages: 2,936

I always use -O0. On Linux (I am new to FreeBSD) I am 100% certain gcc emits incorrect ASM code that causes good applications to fail, and the main cause is -O2; however, gcc can mess up even at -O0.

When you read about optimizing code you see phrases like "this shouldn't fail because" and "this is a good guess in almost every case because", and that's why it doesn't work. "Works 100% of the time or don't do it" is not a language optimizers have learned yet. And clang, well, I'm having to edit code that was good for decades on most C compilers because someone decided it was a diminished language.

Whatever, don't listen to me.
I would say that any code that breaks when optimization is turned on is probably already broken. Try compiling it with -Wall, or run a good linter over it, and you will probably find warnings telling you what's wrong.

Actual compiler bugs (even in optimizing mode) where invalid code is generated are exceedingly rare. In my 25-year career of getting paid to write software, I've seen two. One was more of a kernel/runtime-library bug: the very first floating-point operation after the program started would give invalid results (because the handler for emulating floating point had not been initialized correctly). That one was super hard to catch. The second was in gcc, with extremely complex code involving a 100-line arithmetic expression full of shifts and xors (it was a cryptographic checksum) that always returned zero, because the compiler didn't notice that it had run out of intermediate registers. Strangely, our code worked perfectly but was very slow: the value being calculated was used as a hash code, and the hash table simply put all entries into the collision list for bucket zero, turning an O(1) data structure into an O(n) one. Oops.
 
OP
kavitakr

kavitakr

New Member

Reaction score: 1
Messages: 19

Could you run freebsd-update to get the latest 11.4-p5 version?

IMHO, the more modern the compiler, the longer it takes to compile things and the more bloated the binary gets.

But 1.5 hours of extra time is too much. Could you try compiling without -Os to see if it takes the same amount of time?

BTW, I don't like -Os. I found that -O2 followed by strip --strip-all gives me a smaller binary, but that's just my own opinion; I'm no expert in this field.
We have a product on FreeBSD 11.2 as of now; migrating to FreeBSD 12.2-STABLE is in the pipeline. I did try with -O2; still the same issue.
 
OP
kavitakr

kavitakr

New Member

Reaction score: 1
Messages: 19

You're being too vague, I'm afraid. :confused:
Maybe post information on your system's memory and CPU count?
At a wild guess, it seems like there's swapping going on, causing a huge delay.
What's the particular issue? Is it with one C translation unit?
We are using the same build server; there is no change in system memory or CPU count.
Swapping is not an issue. It's not some one-liner; we have a legacy proxy C code base.
 

mark_j

Aspiring Daemon

Reaction score: 484
Messages: 883

We are using the same build server; there is no change in system memory or CPU count.
Swapping is not an issue. It's not some one-liner; we have a legacy proxy C code base.
You do know that when you upgrade an OS, things change, some dramatically? Just because it's the same box doesn't mean the same result (though, granted, on face value this reported blow-out in time seems very odd). Different defaults, etc.
Anyway, it seems you think it can only be clang. So upgrade the version and see if that fixes it. Oh wait, you're running 11.2... eek, that's old.
 

Snurg

Daemon

Reaction score: 572
Messages: 1,349

I would say that any code that breaks when optimization is turned on is probably already broken. Try compiling it with -Wall, or run a good linter over it, and you will probably find warnings telling you what's wrong.

Actual compiler bugs (even in optimizing mode) where invalid code is generated are exceedingly rare. In my 25-year career of getting paid to write software, I've seen two.
I think some of you have experienced things that seem supernatural.
Strange behavior of generated code, for example, that varies depending on small changes: {a=b;c=d;} works, but with {c=d;a=b;} the behavior is completely different.
These bugs are extremely difficult to catch and reproduce.
More than 30 years ago I had such a thing, and I ended up sending M$ a bug report via snail mail with a short, super-simple code snippet that reproducibly produced very different, obviously incorrect machine code depending on the ordering of some instructions when using -Os. With the next MSC update, which we got some months later, the produced assembly was okay.

I can only say that in this case I had several colleagues review the source code; nobody could find anything incorrect, and all considered the output of the -Os option functionally different from the source code.
The issue was aggravated by the fact that the bug depended on a particular ordering of instructions, so some simple changes that had no effect on the functional outcome could make the incorrect code generation disappear.

So we had to "work around" it by sticking with an order of instructions that did not produce incorrect assembly.

My personal guess is that such things might happen more frequently but stay undiscovered: when the code gets changed during debugging, and the error disappears for some unexplainable reason and cannot be reproduced anymore, people soon stop investigating and move on.
Especially if one, not considering the possibility of incorrectly generated machine code, does not carefully examine the generated machine code in deep detail to understand what is happening.
 