Mobo01 wrote:
I've read that the size of data types like int might differ between systems.
char is at least 8 bits, and in practice it is almost always exactly 8 bits.
short and int are at least 16 bits, but on modern computers you won't find much other than short being exactly 16 bits and int being exactly 32 bits. You'll have to go back to the era of 16-bit computers to find 16-bit ints.
long is at least 32 bits. It's usually either 32 or 64 bits.
long long is at least 64 bits, and I doubt you'll find it being larger than that anywhere.
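If you want to see what the sizes are on your own platform, one quick way is to print sizeof for each type, multiplied by CHAR_BIT from <climits> to convert bytes to bits. A minimal sketch:

#include <climits>
#include <iostream>

int main()
{
    std::cout << "char:      " << sizeof(char)      * CHAR_BIT << " bits\n";
    std::cout << "short:     " << sizeof(short)     * CHAR_BIT << " bits\n";
    std::cout << "int:       " << sizeof(int)       * CHAR_BIT << " bits\n";
    std::cout << "long:      " << sizeof(long)      * CHAR_BIT << " bits\n";
    std::cout << "long long: " << sizeof(long long) * CHAR_BIT << " bits\n";
}

On a typical 64-bit desktop this prints 8, 16, 32, 64 (or 32 on Windows) and 64 bits, but none of that is guaranteed by the standard.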
Mobo01 wrote:
My first question is, can someone provide an example of what goes wrong when a program expects an int is 4 bytes but it is only 2 bytes on another platform?
If you write your code with the assumption that int is 32 bits (and can store values up to 2147483647), then you might run into problems if you recompile the same program on a platform where int is only 16 bits (and can only store values up to 32767). It could easily lead to integer overflows, resulting in wrong values and undefined behaviour.
Example:

#include <iostream>

int main()
{
    int a = 27000;
    int b = 15000;
    int avg = (a + b) / 2;
    std::cout << avg << "\n";
}
This code should print 21000, but if int is 16 bits the expression (a + b) will "overflow" because the sum 42000 doesn't fit in a signed 16-bit integer (whose maximum value is 32767). Signed overflow is undefined behaviour, so you can't even count on getting a predictable wrong answer.
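If the code needs to give the right answer even when int might be 16 bits, one way (just a sketch, not the only approach) is to do the addition in a type that is guaranteed to be wide enough and only convert back at the end:

#include <iostream>

int main()
{
    int a = 27000;
    int b = 15000;

    // long is guaranteed to be at least 32 bits, so the sum 42000 always fits.
    long sum = static_cast<long>(a) + b;
    int avg = static_cast<int>(sum / 2);  // 21000 fits even in a 16-bit int

    std::cout << avg << "\n";
}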
It can also be a problem if you transfer binary data between programs running on different platforms (e.g. by using files or through an internet connection) and don't make sure to use the same number of bits at both the sender and receiver end.
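In practice that usually means agreeing on exact field widths (and a byte order) for anything that crosses the process boundary. The record layout below is made up just for illustration, and it glosses over endianness and struct padding, which a real format would also have to pin down:

#include <cstdint>
#include <cstdio>

// Hypothetical on-disk record: every field has an exact, agreed-upon width,
// so a platform with 16-bit ints and one with 32-bit ints write the same fields.
struct Record
{
    std::uint32_t id;
    std::int32_t  value;
};

int main()
{
    Record r{42, -1000};
    if (std::FILE* f = std::fopen("record.bin", "wb"))
    {
        std::fwrite(&r, sizeof r, 1, f);
        std::fclose(f);
    }
}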
Mobo01 wrote:
I am interested how one may explicitly guarantee that some type is always, say, 32 bits regardless of platform?
If you want to support all platforms (existing and non-existing) that the C++ standard supports, then the simple answer is that you can't. The C++ standard is compatible with platforms where, for example, the smallest addressable unit is 16 bits, which forces char to be 16 bits. On such a platform there would be no way to represent an 8-bit integer.
If you look at the specification for the <cstdint> header you'll see that the standard says that the fixed-width integer types (std::int8_t, std::int16_t, std::int32_t, std::int64_t and their unsigned counterparts) are optional, i.e. they do not need to be available on platforms where they cannot be supported. Other "platform-independent" libraries that come with such fixed-size integer typedefs often just assume that they can be supported, because they know that all platforms that they care about support them.
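So the direct answer to the question is: on any platform that does provide them, use the <cstdint> typedefs. A minimal sketch:

#include <cstdint>
#include <iostream>

int main()
{
    // std::int32_t is exactly 32 bits on every platform that provides it.
    // Where no such type exists, this fails to compile instead of silently
    // overflowing at run time.
    std::int32_t a = 27000;
    std::int32_t b = 15000;
    std::int32_t avg = (a + b) / 2;

    std::cout << avg << "\n";  // 21000, regardless of how big a plain int is
}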
If you are very paranoid and want to support platforms that do not use 8-bit bytes, or you know you're working with some specialized hardware that uses unusual sizes, then you can use a larger integer type if necessary. <cstdint> also provides std::int_least8_t, std::int_least16_t, etc., and these are not optional. You would then have to write the code in such a way that it doesn't assume the integers are of a certain size. For example, you can no longer rely on the wraparound behaviour of unsigned integer types working the same as with a smaller fixed-size integer type without doing additional masking, as in the sketch below.
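To show what that masking looks like, here is a sketch that emulates 16-bit unsigned wraparound with std::uint_least16_t, which is allowed to be wider than 16 bits:

#include <cstdint>
#include <iostream>

int main()
{
    // uint_least16_t is the smallest available type with at least 16 bits;
    // it may be wider, so wraparound at 65536 has to be done by hand.
    std::uint_least16_t counter = 65535;
    counter = (counter + 1u) & 0xFFFFu;  // mask down to 16 bits explicitly

    std::cout << counter << "\n";  // prints 0, as a true 16-bit counter would
}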
Normally I think it's fine to just assume you can use the fixed-width integer types. I don't see the advantage of using anything other than the ones in <cstdint> now that they have been standardized. The reason so many libraries use their own typedefs probably has a lot to do with the fact that they were not standardized in C++ until C++11. In C they were standardized a bit earlier, in C99, but adoption was slow for a long time and many C libraries wanted (perhaps still want?) to support older versions of the standard.