Casting to Enum Gets Negative Value

I'm having an issue when retrieving a numeric value from a message and attempting to cast it to the appropriate value in an enum I have declared--the value always comes out negative, and I'm not sure how the FFFF is getting padded onto the beginning of the enum value. I have declared the enum this way:

enum MyType
{
   OneType = 0x8001,
   AnotherType = 0x0304
};


The message comes as a pointer that I am incrementing along, and the first 16 bits are the type that should correspond to this enum, so I cast it as follows:

MyType type = static_cast<MyType>(msgPtr[0]);

where msgPtr is declared as unsigned int*. The value at msgPtr[0] is 0x8001 (32769), but the value stored in type becomes 0xFFFFFFFFFFFF8001 (-32767) as observed in my watch window.

I've never run into a situation like this before, so I'm assuming there must be a detail about either casting or enumerative declaration that I'm not understanding. Suggestions?
Unfortunately you are running into a problem that I have had in the past. This is called sign extension. Since the enum is an integral value that is evidently stored as 32 bits, and the input value has the MSB set, you are seeing the upper 16 bits filled with the sign bit. The enum is typically a signed integral type. I don't believe there is a way to make this work via a cast alone. You'll probably have to memcpy the value from the message into a zero-initialized 32-bit integer first, and then construct the enum from that int.

Also, what is msgPtr pointing to, anyway? You want to be careful with this type of programming. I assume you have an array of words or something representing a block of data. You could write a message-handling class for the specific message type that contains the message structure as an attribute. Then you can write a serialize function that extracts the data in a way that is more typesafe. The problem is that you don't really have any guarantee about the underlying type of an enum. The standard allows it to be implementation-defined, so a direct cast from an integer may not always be a safe thing to do.
What is different about your code? This gives the expected answer:

#include <iostream>

enum test_t
{
    a = 0x8001,
    b = 0x0304
};

int main()
{
    int array[] = {0x8001};
    test_t x = static_cast<test_t>(array[0]);
    std::cout << "static_cast: " << x << std::endl;
    return 0;
}

@kempofighter: msgPtr is pointing to a word stream that I am receiving, per requirements (gotta love 'em). I realize this is not the most typesafe way to do it, but creating a class wrapper means slightly more processing time and overhead, and I can't afford that. The unusual thing is that I believe I tried what you are suggesting. I tried reading the value directly from memory into an unsigned int, and then casting the unsigned int to the MyType enum. In the watch window, the value is read properly from memory into the unsigned int, but when cast ... it still goes negative. To clarify, this still produces the same negative cast result:

unsigned int typeInt = msgPtr[0];
MyType type = static_cast<MyType>(typeInt);


where typeInt IS the right number (positive), but when it is cast to MyType, it goes back to -32767 for some reason.

Was there a more rigorous way you "padded with zeros" to force the dropping of that sign bit?

@PanGalactic: The difference between your code and mine is that you are defining an array and specifying the desired value in the array (thus storing that value into memory), while I am reading via a pointer--essentially the same thing, but somewhere between your array definition (which works) and my storage of the value directly into memory, the sign bit is getting lost in translation.

Thanks for your help in this matter.
But what is the exact type of the msg? I seem to remember that it had something to do with that. Typically you have a struct that is an array of values (int, char, unsigned int, or something like that). Later you memcpy into the array, and then you want to convert the raw data into a value. I'm surprised that copying the data into a 32-bit SIGNED int doesn't work for you. I see that you are still trying to use an unsigned int as an intermediate value.
I duplicated the problem using this example. I now remember that it had to do with bit fields for me. Look at the assembly: the compiler is doing a sign extension during the extraction from the bit field of the struct instance.

#include <iostream>

enum test_t
{
    a = 0x8001,
    b = 0x0304
};

struct MsgType
{
    int value1 : 16;
    int value2 : 16;
};

int main()
{
    MsgType msg = { 0x8001 };
    test_t x = static_cast<test_t>(msg.value1);
    std::cout << "static_cast: " << x << std::endl;
    return 0;
}


I am using VS2005 but have seen this with other compilers as well. Here is the problem in assembly. Notice that during the extraction it shifts left and then shifts right using an arithmetic right shift, which extends the sign. I am surprised that it did this even for unsigned types.
temp = msg.value1;
004140B0  mov         eax,dword ptr [msg] 
004140B3  shl         eax,10h 
004140B6  sar         eax,10h 
004140B9  mov         dword ptr [temp],eax 
EDIT: In my original test I had a bug in my code, so I was mistaken, and I had to rework this post completely. I can get the temp-value solution to work in VS if the struct contains unsigned int types. However, the temp-value solution works in other compilers even if the underlying types are int. Unfortunately I do not know how to explain the best and most portable solution, since I do not know whether your problem is similar to this.

#include <iostream>

enum test_t
{
    a = 0x8001,
    b = 0x0304
};

struct MsgType
{
    int value1 : 16;
    int value2 : 16;
};

struct MsgTypeUnsigned
{
    unsigned int value1 : 16;
    unsigned int value2 : 16;
};

int main()
{
    // This does seem to work because all underlying types are unsigned
    MsgTypeUnsigned msgUS = { 0x8001 };
    unsigned int temp = msgUS.value1;
    test_t y = static_cast<test_t>(temp);
    std::cout << "static_cast: " << y << std::endl;

    // This does not seem to work because the underlying type of the struct is
    // signed.  However the adamulti compiler makes this work.  This seems to
    // be a Visual Studio 2005 issue.
    MsgType msg = { 0x8001 };
    unsigned int temp2 = msg.value1;
    test_t x = static_cast<test_t>(temp2);
    std::cout << "static_cast: " << x << std::endl;

    return 0;
}


The assembly code that I see with non-VS compilers looks more like this, where it simply moves the data into the new variable and then clears the upper half (even when I do the direct static_cast).
0x40794a  main+0x4e: 	8b c3                         movl      %ebx <msg>,%eax <y>
0x40794c  main+0x50: 	25 ff ff 00 00                andl      $0xffff,%eax


Another possible solution could be to simply define the struct as an array of unsigned ints with no bitfields and then do the copying/masking all manually. The solution will depend on how complex your struct type is and how much work you are willing to put into a better serialization technique.
Thank you for your suggestions--they were very insightful. Unfortunately, I am still turning up failure. The compiler I'm using is one written by Texas Instruments (Code Composer Studio) specifically for DSP/BIOS. Unfortunately, they have overhauled this compiler to rewrite and/or strip out a lot of C/C++ functionality (for instance, I don't even have access to "standard" functionality such as std::cout or memcpy). This makes for an extremely lightweight executable, and it also makes for safer coding practices. However, it is extremely frustrating in this situation.

My assembly looks similar to yours, where the generated code is always ANDing with 0xffff no matter what I do, even when I assign 16-bit fields and explicitly cast to an unsigned int.

On the bright side, your suggestion for a message struct was very useful, and I'm enjoying a simplicity that I hadn't thought of. I'll continue to try to find a workaround for this issue, and I'll be sure to post the answer as soon as I find a solution.
My assembly looks similar to yours, where the generated code is always ANDing with 0xffff no matter what I do, even when I assign 16-bit fields and explicitly cast to an unsigned int.


Actually, the assembly I showed with the andl 0xffff was the good code. The bad code was the block that showed the shl / sar. You want the compiler to simply move the bits and then clear the upper half. It is the shift-left / arithmetic-shift-right pair that is the problem; that is what causes the sign bit to be filled into all of the upper bits. Good luck.
Topic archived. No new replies allowed.