C/C++, int32_t and typedefs

I'm writing a C++ program, and for me it's very important to have, for example, a signed integer that is exactly 32 bits. I believe that I can use int32_t (after importing some header). What is the exact import line (that I would write in my source code) that is the most cross-platform in order for me to use int32_t? Does every UNIX-like system have the int32_t type defined? Does the import line look different on different UNIX systems?

Also, I'm dealing with floating point numbers. I'm actually serializing their raw bits to a file, and the file has to work correctly across platforms. First, I need a floating point that is exactly 32 bits. Second, I need to get the raw bits, and serialize/de-serialize them to a 4 byte quantity in some manner that will be cross-platform. Here's what I have:

Code:
typedef union {
  float flt;
  char bits[sizeof(float)];
} float_bits;

So I use the bits array to write out the individual bytes of the float. Now two things: I need a floating point number that's exactly 32 bits, and I need to think about byte order (endianness or something?).

Right now I'm doing something like this:

Code:
float f = 0.182f;
float_bits fltBits;
fltBits.flt = f;
for (int i = 0; i < 4; i++) {
  write(fltBits.bits[i]); // Actually "bits" should probably be renamed to "bytes".
}

I'm doing something analogous for the reading (de-serializing) part. Like I said, this has to be cross-platform.

Any ideas on some robust coding methods to do this sort of stuff? I'm also serializing integers by the way. I'm fairly new to C++ programming, but I have done 13 years of Java (please don't laugh, I know Java's not considered to be a "real" language by many of you hardcore folks).
 
Hi,
First you must determine the endianness of your machine, for example:

Code:
#include <stdio.h>
unsigned short x = 1; /* 0x0001 */
int main(void)
{
  printf("%s\n", *((unsigned char *) &x) == 0 ? "big-endian" : "little-endian");
  return 0;
}


And when you serialize/de-serialize to a 4-byte quantity, you must decide whether to write the most significant byte first or the least significant byte first.
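
For example, if you settle on MSB-first, you can assemble the bytes with shifts and never touch the value's memory layout, so the writing machine's endianness stops mattering entirely. A minimal sketch (put_u32_be/get_u32_be are made-up names, and the FILE * is just a stand-in for wherever the bytes are really going):

Code:
#include <stdint.h>
#include <stdio.h>

/* Write v MSB-first (big-endian) regardless of host byte order.
   The shifts operate on the value, not on its memory layout. */
void put_u32_be(FILE *out, uint32_t v)
{
  fputc((v >> 24) & 0xFF, out);
  fputc((v >> 16) & 0xFF, out);
  fputc((v >> 8) & 0xFF, out);
  fputc(v & 0xFF, out);
}

/* Read the four bytes back in the same order. */
uint32_t get_u32_be(FILE *in)
{
  uint32_t v = 0;
  int i;
  for (i = 0; i < 4; i++)
    v = (v << 8) | (uint32_t)fgetc(in);
  return v;
}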
 
rambetter said:
(please don't laugh, I know Java's not considered to be a "real" language by many of you hardcore folks).

Some people don't think C++ is a "real" language either.

Re endianness:

Have a look at /usr/include/sys/endian.h for some useful definitions and macros.

Also see /usr/include/machine/endian.h, which is where FreeBSD defines BYTE_ORDER. There's no need to write a function to probe for this if you include either <sys/endian.h> or <sys/types.h> --- both of which in turn include machine/endian.h.

I'm not familiar with Linux, but they probably have a similar setup.

If you're going to be writing code that deals with these low-level issues, it will be worthwhile to spend some time getting to know the contents of the system header files. Just keep following the breadcrumbs until you can see how it all works.
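
For instance, the run-time probe from the earlier post can become a compile-time check (a FreeBSD-flavored sketch; other systems may keep these macros in a different header):

Code:
#include <sys/endian.h> /* pulls in machine/endian.h on FreeBSD */
#include <stdio.h>

int main(void)
{
#if BYTE_ORDER == LITTLE_ENDIAN
  printf("little-endian\n");
#elif BYTE_ORDER == BIG_ENDIAN
  printf("big-endian\n");
#else
  printf("some other byte order\n");
#endif
  return 0;
}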
 
Yes, I will definitely try Boost in the future. For now, this is purely a learning experience, so I'd like to get to know how to do this sort of stuff from scratch.
 
rambetter said:
I'm writing a C++ program, and for me it's very important to have, for example, a signed integer that is exactly 32 bits. I believe that I can use int32_t (after importing some header). What is the exact import line (that I would write in my source code) that is the most cross-platform in order for me to use int32_t? Does every UNIX-like system have the int32_t type defined? Does the import line look different on different UNIX systems?

Code:
#include <stdint.h>
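
That header comes from C99, so the line is the same on every UNIX-like system; in C++ you can also write #include <cstdint>, which puts the names into namespace std. A quick illustration (the typedef is just the classic negative-array-size trick for a compile-time assertion, not anything int32_t requires):

Code:
#include <stdint.h>

/* int32_t is exactly 32 bits, two's complement, no padding bits,
   wherever it is defined. */
int32_t counter = -42;

/* Compile-time check: the array size is -1 (an error) if the
   condition is false. */
typedef char int32_check[sizeof(int32_t) == 4 ? 1 : -1];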

rambetter said:
Also, I'm dealing with floating point numbers. I'm actually serializing their raw bits to a file, and the file has to work correctly across platforms. First, I need a floating point that is exactly 32 bits. Second, I need to get the raw bits, and serialize/de-serialize them to a 4 byte quantity in some manner that will be cross-platform. Here's what I have:

Code:
typedef union {
  float flt;
  char bits[sizeof(float)];
} float_bits;

So I use the bits array to write out the individual bytes of the float. Now two things: I need a floating point number that's exactly 32 bits, and I need to think about byte order (endianness or something?).

Right now I'm doing something like this:

Code:
float f = 0.182f;
float_bits fltBits;
fltBits.flt = f;
for (int i = 0; i < 4; i++) {
  write(fltBits.bits[i]); // Actually "bits" should probably be renamed to "bytes".
}

I'm doing something analogous for the reading (de-serializing) part. Like I said, this has to be cross-platform.

Any ideas on some robust coding methods to do this sort of stuff? I'm also serializing integers by the way. I'm fairly new to C++ programming, but I have done 13 years of Java (please don't laugh, I know Java's not considered to be a "real" language by many of you hardcore folks).

Why not just use htonl(3) and ntohl(3) (see byteorder(3) for more details)?
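
Those give you big-endian ("network order") conversion for free, and they cover the float case too once the raw bits are in a uint32_t. A sketch, assuming float is the usual 32-bit IEEE-754 type (float_to_wire/float_from_wire are made-up names, and memcpy sidesteps the union aliasing questions):

Code:
#include <arpa/inet.h> /* htonl(3), ntohl(3) */
#include <stdint.h>
#include <string.h>

/* Pack a float's raw bits into big-endian byte order. */
uint32_t float_to_wire(float f)
{
  uint32_t bits;
  memcpy(&bits, &f, sizeof bits); /* raw IEEE-754 bits */
  return htonl(bits);             /* host order -> big-endian */
}

/* And unpack them on the way back in. */
float float_from_wire(uint32_t wire)
{
  uint32_t bits = ntohl(wire); /* big-endian -> host order */
  float f;
  memcpy(&f, &bits, sizeof f);
  return f;
}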
 