1) The compiler does implicit conversions for you based upon its rules of type promotion. Since unsigned char is an integral type, the only candidates are abs( int ) and abs( long ), and the compiler will choose the int version. The unsigned char is therefore implicitly converted (promoted) to int and then passed to the function. The return type is int, which you then probably store in something other than int, which means yet another implicit conversion from int back to your type.
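To make that concrete, here is a minimal sketch of what the compiler does behind the scenes (the value 200 is just for illustration):

    #include <cstdlib>    // declares std::abs( int ) and std::abs( long )
    #include <iostream>

    int main()
    {
        unsigned char uc = 200;

        // uc is promoted to int, the abs( int ) overload is chosen,
        // and the int it returns is converted back to unsigned char.
        unsigned char result = std::abs( uc );

        std::cout << static_cast<int>( result ) << std::endl;   // prints 200
        return 0;
    }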
2) I am a proponent of using the "right" size (type). For example**, the number of elements in a list belongs in an unsigned type: it is nonsensical to think of a list containing -3 elements. Using a signed type just makes you write extra if() statements:
    if( numElements < 0 )
        cout << "Whoa, list is even less than empty!" << endl;
    else if( numElements > MAX_SUPPORTED )
        cout << "List too big" << endl;
Whether I use unsigned char, unsigned short, unsigned int, or unsigned long(!) depends on how many elements will be stored in the list. If the list is a list of players on a basketball team, there will probably never be more than 255, so unsigned char works.
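Something along these lines, purely as an illustrative sketch (the struct name and field are hypothetical):

    // The roster count will never exceed 255, so unsigned char is the
    // "right" size here; a bigger list would get a bigger unsigned type.
    struct BasketballTeam
    {
        unsigned char numPlayers;
        // ... names, jersey numbers, etc.
    };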
IMHO the performance difference between char and int is negligible. [It is worth noting that it is not necessarily faster to access an int than a char, particularly if the int is misaligned.] Only in the 0.0001% of applications where every ounce of speed is required would I resort to such micro-optimizations. (And in most cases the compiler will be able to optimize the code better than a human anyway.)
And lastly, code that has a lot of explicit typecasting is poorly designed. Code that uses a lot of implicit typecasting is stupidly written. [Ignorance may be bliss, but it can lead to problems that are hard to track down.]
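To illustrate the kind of hard-to-track problem I mean, here is a classic signed/unsigned gotcha (hypothetical variables, not anyone's real code); most compilers will warn about the comparison if warnings are turned up:

    #include <iostream>

    int main()
    {
        int      balance = -1;
        unsigned limit   = 1;

        // The usual arithmetic conversions silently turn balance into a huge
        // unsigned value before the comparison, so this branch is taken.
        if( balance > limit )
            std::cout << "over the limit?!" << std::endl;

        return 0;
    }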
** Ok, yes, yes, yes, I know. When using STL containers, I don't need to store the size myself in most cases (std::list is the one container whose size() could be O(n), at least prior to C++11...). And if I did, I'd be using container::size_type anyway.
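i.e. something like this sketch (the container contents are just for illustration):

    #include <iostream>
    #include <list>
    #include <string>

    int main()
    {
        std::list<std::string> players;
        players.push_back( "center" );

        // size_type is already the "right" unsigned type for this container,
        // whatever the implementation chose it to be.
        std::list<std::string>::size_type count = players.size();

        std::cout << count << std::endl;   // prints 1
        return 0;
    }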