I know the standard says the following:
- Integer literals starting with 0 are interpreted as octal.
- Integer literals starting with 0x or 0X are interpreted as hexadecimal.
The type of an integer literal depends on its value and notation:
- Decimal literals are signed by default and get the smallest of int, long, long long in which the value fits.
- Hexadecimal and octal literals can be signed or unsigned and get the smallest of int, unsigned int, long, unsigned long, long long, unsigned long long in which the value fits.
- There are no literals of type short, but the default type can be overridden with a suffix (u, l, ll, etc.).
But what about VC++? It seems to treat decimal, octal, and hexadecimal literals the same, and unsigned types are also allowed for decimal literals.
For example, the following code:
#include <iostream>
#include <typeinfo>
int main() {
    std::cout << typeid(4294967295).name() << std::endl;
    std::cout << typeid(4294967296).name() << std::endl;
    std::cout << typeid(0xffffffff).name() << std::endl;
    std::cout << typeid(0x100000000).name() << std::endl;
}
gives:
unsigned long
__int64
unsigned int
__int64
Is this expected, and why does it differ from the standard?