[llvm-commits] Windows compilation warnings
Dimitry Andric
dimitry at andric.com
Fri Jul 16 04:26:19 PDT 2010
On 2010-07-16 04:00, Villmow, Micah wrote:
> This patch set makes Windows build cleanly for 32bit binaries from a
> 64bit machine.
Hmm, I regularly build 32-bit binaries on my 64-bit Windows machine,
but I never get many warnings about truncations, except the recently
introduced ones about enums being truncated. It is only when I build
the x64 project that I get a lot of those warnings.
Are you compiling with non-standard settings? E.g. -Wp64 (which, as I
recall, is deprecated) or -W4, the maximum warning level?
> Some thoughts about what I saw while doing this.
>
> size_t is not used in places where it should be, return value of
> size(), input value of resize().
Actually, in most cases the class's "size_type", "difference_type" and
so on should be used; AFAIK there is no guarantee these are always
equivalent to size_t (although in practice they usually are).
But indeed, there are many places where size_types are stuffed into
plain ints or unsigneds, and it may not be easy to just change the
variables that hold them to size_type. For example, you may also need
to change function declarations, and such changes tend to "cascade"
through a lot of the source.
> getZExtValue()/getSExtValue() need 32bit explicit versions, this
> would remove probably 60% of truncation casts.
But you would still need to go over *all* the invocations to replace
them... and be extremely careful not to break anything with those
replacements.
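If such 32-bit variants were added, I would imagine something like the
following (purely hypothetical, this helper does not exist in the tree
today), which at least puts the truncation and an assert in one place
instead of a cast at every call site:

  #include <cassert>
  #include <cstdint>

  // Hypothetical helper, not an existing LLVM API: truncate the
  // 64-bit result of getZExtValue() explicitly, with a sanity check.
  static inline uint32_t zextValueAs32(uint64_t Full) {
    assert(Full <= UINT32_MAX && "value does not fit in 32 bits");
    return static_cast<uint32_t>(Full);
  }

  // Every existing call such as
  //   unsigned Idx = CI->getZExtValue();            // truncation warning
  // would still have to be touched by hand, e.g.
  //   unsigned Idx = zextValueAs32(CI->getZExtValue());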
> uint64_t is overused in places where uint32_t is sufficient.
There may be reasons for those uses, but the only ones who can explain
them are the authors of the code in question. :)
> unsigned/signed is used where uint32_t/int32_t should be used.
>
> Should there be something the developers notes about these? I think
> it is better to explicitly specify the integer size instead on
> relying on the compiler to pick the size.
I am not sure that hardcoding the number of bits in the type is the
best solution in all cases; it really depends on the specific piece of
code.
Most of the time you just want plain int or unsigned, because that
will be the machine's natural word. You only have to pay attention
when mixing those with "size" types, such as object lengths and
pointer differences.
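As a rough sketch of what I mean (again, just an illustration):

  #include <cstddef>
  #include <vector>

  void example(const std::vector<char> &Buf,
               const char *Begin, const char *End) {
    // Plain unsigned is fine for an ordinary counter that never holds
    // an object size.
    unsigned Count = 0;
    for (unsigned i = 0; i != 100; ++i)
      ++Count;

    // Object lengths and pointer differences are the places to watch:
    // keep them in size_t / ptrdiff_t, which are 64-bit on x64.
    std::size_t Len = Buf.size();
    std::ptrdiff_t Dist = End - Begin;
    (void)Count; (void)Len; (void)Dist;
  }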