[OT] Is 64 = 32 times 2?

Talking with a colleague of mine, a doubt arose: is 64-bit computing exactly double 32-bit computing? I don't think so, but my colleague states that having 64-bit instructions means that almost every program will occupy double the space on disk, have double the memory footprint, and produce many more cache misses.
My opinion is that, since many instructions are 32-bit compatible, they can be stored compactly both on disk and in RAM, and that caches should have been adjusted to reflect the new instruction format. Any comments?
 
Compile the same source code for i386 and amd64 and compare the sizes of the executables and the memory footprints; you'll be surprised to find that they are quite closely matched rather than showing the claimed 2:1 ratio.
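As a minimal sketch of that experiment (the file name and flags are just illustrative, and this assumes a compiler with multilib support for both targets):

Code:
/* hello.c -- trivial program for comparing i386 vs amd64 binary size.
 *
 * Build both variants and compare:
 *   cc -O2 -m32 -o hello32 hello.c
 *   cc -O2 -m64 -o hello64 hello.c
 *   size hello32 hello64
 * The text segments will typically differ by a few percent,
 * nowhere near 2:1. */
#include <stdio.h>

int main(void)
{
    printf("Hello, %d-bit world!\n", (int)(sizeof(void *) * 8));
    return 0;
}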
 
64-bit usually refers to the size of the registers in the CPU, and thus to the common low-level data types such as 64-bit integers. It also refers to the width of the data bus, although there are exceptions. There is little or no connection to the size of executable code, since that is mostly determined by the instruction encoding of the instruction set.
The Wikipedia page "64-bit computing" explains it quite well, although it does not directly address your question.
 
A 64-bit processor isn't inherently faster than a 32-bit one. This is especially true for current processors that can run both. Both instruction sets are executed directly on the CPU; there's no translation. That means that an ADD instruction, for example, takes as much time on 64-bit as on 32-bit.

However, 64-bit does have larger registers that can hold more data. In certain situations this can speed things up; most of the time, however, it will not. Keep in mind that the external data bus has been 64 bits wide since the introduction of the Pentium processors.
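One place where the wider registers do help is bulk data movement: each 64-bit register moves eight bytes per load or store instead of four. A minimal sketch (illustrative only; a real memcpy is far more sophisticated):

Code:
#include <stddef.h>
#include <stdint.h>

/* Copy n bytes, assuming n is a multiple of 8 and both buffers are
 * suitably aligned. On amd64 each uint64_t load/store moves 8 bytes
 * through a single register; on i386 the compiler has to split every
 * element into two 32-bit memory operations. */
void
copy64(uint64_t *dst, const uint64_t *src, size_t n)
{
    for (size_t i = 0; i < n / 8; i++)
        dst[i] = src[i];
}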
 
fluca1978 said:
Talking with a colleague of mine, a doubt arose: is 64-bit computing exactly double 32-bit computing? I don't think so, but my colleague states that having 64-bit instructions means that almost every program will occupy double the space on disk, have double the memory footprint, and produce many more cache misses.
My opinion is that, since many instructions are 32-bit compatible, they can be stored compactly both on disk and in RAM, and that caches should have been adjusted to reflect the new instruction format. Any comments?

It's interesting that you used the term "double" in the sense of "twice", i.e. 2*n. For a moment I thought you meant double as in floating-point precision.

Maybe you're thinking of signed vs. unsigned ranges. Both take up the same number of bits; a signed type just shifts the range so that zero sits near the middle rather than at the minimum. Signed integers are used where negative values must be represented (think of a 3-position switch: negative, zero, positive).

An integer's maximum value grows enormously when its size (see sizeof[1]) goes from 32 to 64 bits.

The range of a 32-bit unsigned integer is:
0 to 4,294,967,295

The range of a 64-bit unsigned integer is:
0 to 18,446,744,073,709,551,615
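A quick way to verify those limits (a minimal sketch using the fixed-width types from <stdint.h>, which sidestep the question of how wide a plain int happens to be on a given platform):

Code:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* Same 32 bits; signed merely shifts the range around zero. */
    printf("int32_t : %" PRId32 " to %" PRId32 "\n", INT32_MIN, INT32_MAX);
    printf("uint32_t: 0 to %" PRIu32 "\n", UINT32_MAX);
    /* Doubling the width squares the number of representable values. */
    printf("uint64_t: 0 to %" PRIu64 "\n", UINT64_MAX);
    return 0;
}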

Bits (i.e. binary digits) don't scale linearly: each added bit doubles the range, so going from 32 to 64 bits squares the number of representable values rather than doubling it. Signed values are represented in two's complement[2]. It isn't faster either; a wider int is more like higher resolution where applicable.


[1] https://en.wikipedia.org/wiki/Sizeof
[2] https://en.wikipedia.org/wiki/Two's_complement

The programmer controls how much memory is used by declaring primitive types such as char, int, double, float. signed and unsigned define the range, while short, long, and long long control how many bytes the type occupies, fixed at compile time. [3]

[3] https://en.wikipedia.org/wiki/C_data_types#Basic_types
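To see what those modifiers do on a given platform, here is a minimal sketch (the printed sizes depend on the data model, e.g. LP64 on amd64 vs ILP32 on i386):

Code:
#include <stdio.h>

int main(void)
{
    /* Sizes are fixed at compile time but differ between platforms:
     * under LP64, long and pointers are 8 bytes; under ILP32 they
     * are 4. int is 4 bytes on both. */
    printf("char     : %zu\n", sizeof(char));
    printf("short    : %zu\n", sizeof(short));
    printf("int      : %zu\n", sizeof(int));
    printf("long     : %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    printf("float    : %zu\n", sizeof(float));
    printf("double   : %zu\n", sizeof(double));
    printf("void *   : %zu\n", sizeof(void *));
    return 0;
}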

I hope you find this information useful. Here is another link :stud:
https://en.wikipedia.org/wiki/Integer_(computer_science)#Common_long_integer_sizes
 
Remember that on a 64-bit architecture pointers are twice as large. For most programs this doesn't matter much, but for programs that use lots of pointers (e.g. a LISP system) it can have a big impact, approaching a doubling of the data size.
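A minimal sketch of why pointer-heavy data balloons (the cons cell below is an illustrative LISP-style structure, not taken from any particular implementation):

Code:
#include <stdio.h>

/* A LISP-style cons cell: nothing but two pointers. */
struct cons {
    void        *car;
    struct cons *cdr;
};

int main(void)
{
    /* 8 bytes on a 32-bit system, 16 on a 64-bit one; for data that
     * is mostly cons cells, the footprint really does double. */
    printf("sizeof(struct cons) = %zu\n", sizeof(struct cons));
    return 0;
}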
 
You have to keep in mind that there are quite a few 32/64-bit architectures and that they are very different from one another. You have to be more specific when comparing.

If you mean x86_64 vs x86, there is very little difference, since x86_64 is an instruction set extension of x86. The major differences are the physical/virtual address space and 8 extra general-purpose registers. The x86_64 physical cache is tuned accordingly by the manufacturers, and there should be no difference in cache miss rate. The size of a binary executable will depend on the compiler, but for C/C++ (using clang/gcc), thanks to compile-time optimizations, there will be almost no difference in size.
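Those 8 extra registers (r8 through r15) show up, for example, in the calling convention. A minimal sketch (the comments describe typical clang/gcc output; assuming the file is saved as add6.c, compare cc -O2 -S -m64 add6.c against -m32):

Code:
/* Under the SysV amd64 ABI all six arguments arrive in registers
 * (%rdi, %rsi, %rdx, %rcx, %r8, %r9), the last two being registers
 * plain x86 does not have, so the body is just five adds.
 * Under i386 cdecl every argument is passed on the stack and must
 * be loaded before it can be used. */
long
add6(long a, long b, long c, long d, long e, long f)
{
    return a + b + c + d + e + f;
}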
 
SirDice said:
A 64-bit processor isn't inherently faster than a 32-bit one. This is especially true for current processors that can run both. Both instruction sets are executed directly on the CPU; there's no translation. That means that an ADD instruction, for example, takes as much time on 64-bit as on 32-bit.

However, 64-bit does have larger registers that can hold more data. In certain situations this can speed things up; most of the time, however, it will not. Keep in mind that the external data bus has been 64 bits wide since the introduction of the Pentium processors.

If you are comparing x86 vs x86_64 performance, the only major difference is 64-bit integer and floating-point arithmetic, which will be significantly faster on x86_64 in combination with the SSE extensions. But that is mostly significant in 3D graphics and scientific math problems, and it's done on GPUs anyway (like it should be :P).
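A minimal sketch of the kind of arithmetic that benefits (illustrative only):

Code:
#include <stdint.h>

/* A single multiply instruction on amd64. On i386 no register can
 * hold a 64-bit operand, so the compiler expands this into three
 * 32-bit multiplies plus adds (or a call to a helper routine). */
uint64_t
mul64(uint64_t a, uint64_t b)
{
    return a * b;
}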
 