I can foresee one possible problem with your suggestion... I don't think the following would give the desired results:
(INT64)0x100000000
Compilers *do* support that; it's just not standard. GCC issues a warning about it, but still encodes it as a 64-bit "long long" constant. M$VC-- issues no warning, yet it too emits an "__int64" constant.
Perhaps you could just add a flag to the makefile telling GCC to ignore that particular warning? Then the only differences left are typedefs and *printf (which ReactOS doesn't use anyway, having replaced it with custom versions). No special wrapping macros all over the place, except perhaps for casts in some instances. These compilers assume (rightly so) that 0x100000000 is a 64-bit constant.
Really, I think all integer constants should be internally represented as the largest possible integer type, and the compiler should only bitch when an implicit cast occurs on an out-of-range value. That's how it already works with chars, since character constants are implicitly ints. The cases where the difference matters are rare: other than the meaning of the expression "0x80000000 >> 1", I can't think of one.
I personally would say C should standardize char=1 short=2 int=4 long=8 and maybe long long=16 and intptr=void *. Never going to happen though.
Melissa
$ gcc test.c -o test.exe
test.c: In function `main':
test.c:7: warning: integer constant is too large for "long" type
$ ./test
FACEC0DEDEADBEEF
C:\test>cl test.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 13.10.3077 for 80x86
Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.

test.c
Microsoft (R) Incremental Linker Version 7.10.3077
Copyright (C) Microsoft Corporation. All rights reserved.

/out:test.exe
test.obj

C:\test>test
FACEC0DEDEADBEEF