[LLVMdev] Clang question on x86-64 class type definitions

Panning, Benjamin J benjamin.j.panning at intel.com
Tue May 19 09:30:10 PDT 2015


Hi David,

Thank you for your response. The -target I’m using is “x86_64-pc-win”, and in order to see the output, my test adds the following code after the class definitions:

     TestClass3 g_test3;
     TestClass1 *g_test1Ptr = &g_test3;

Without this code, the types end up being eliminated.

Unfortunately we have an optimization pass that crawls this type information and requires that base class types be present in derived class types.  The pass breaks down when the base class types are replaced with byte array types.

Would you happen to know if there is any way to disable the conversion of the base class type to byte array type?

Thanks,
Ben

From: David Majnemer [mailto:david.majnemer at gmail.com]
Sent: Sunday, May 17, 2015 7:31 PM
To: Panning, Benjamin J
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] Clang question on x86-64 class type definitions


On Fri, May 15, 2015 at 4:48 PM, Panning, Benjamin J <benjamin.j.panning at intel.com> wrote:
Hi All,

I have a question on the type definitions that Clang generates when compiling for x86-64.  Here is the C++ code that I compile:

    class TestClass1 {
        int X;
    public:
        virtual int Foo() { return 1; }
    };

    class TestClass2 : public TestClass1 {
        int X;
    public:
        virtual int Foo() { return 1; }
    };

    class TestClass3 : public TestClass2 {
        int X;
    public:
        virtual int Foo() { return 1; }
    };

Here are the command lines I am using, for 32-bit and 64-bit mode respectively:

    clang.exe -S -emit-llvm -O0 test.cpp -o test.ll
    clang.exe -S -emit-llvm -O0 test.cpp -o test.ll -march=x86-64

The 32-bit compile produces this, which I understand:

    %class.TestClass3 = type { %class.TestClass2, i32 }
    %class.TestClass2 = type { %class.TestClass1, i32 }
    %class.TestClass1 = type { i32 (...)**, i32 }

The 64-bit compile produces this:

    %class.TestClass3 = type { %class.TestClass2, i32, [4 x i8] }
    %class.TestClass2 = type { [12 x i8], i32 }
    %class.TestClass1 = type { i32 (...)**, i32 }

A few basic questions: (1) Why does TestClass2 have a [12 x i8] array instead of a reference to TestClass1?

What triple are you using? I cannot reproduce your %class.TestClass2 type.

(2) What is the [4 x i8] array at the end of TestClass3?

The record is eight-byte aligned because of the vptr at offset 0. Those four bytes are padding that rounds the record's size up to a multiple of its alignment (8 bytes in this example).
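
To make the arithmetic concrete, here is a rough offset sketch assuming the 64-bit types you pasted (I have not re-checked this against an actual compile):

    ; %class.TestClass3 = type { %class.TestClass2, i32, [4 x i8] }, 8-byte aligned
    ;   offset  0..7    vptr                  (forces 8-byte alignment)
    ;   offset  8..11   TestClass1::X         (vptr + this X = the [12 x i8] in TestClass2)
    ;   offset 12..15   TestClass2::X
    ;   offset 16..19   TestClass3::X
    ;   offset 20..23   [4 x i8] tail padding -> total size 24, a multiple of 8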

(3) Is it safe to use a bitcast instead of a getelementptr constant expression for casting a TestClass3* to a TestClass1* in this case (I see no other way of doing it…)?

Yes, it is perfectly safe to use bitcast instead of gep here.
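
For example, the globals from your test could be emitted along these lines (just a sketch reusing your g_test3/g_test1Ptr names; the real IR clang emits would also initialize the vtable pointer, which I have left out):

    ; zero-initialized instance of the derived class
    @g_test3 = global %class.TestClass3 zeroinitializer, align 8
    ; pointer to the base subobject, obtained with a bitcast constant expression
    @g_test1Ptr = global %class.TestClass1* bitcast (%class.TestClass3* @g_test3 to %class.TestClass1*), align 8

The bitcast is offset-preserving, and the TestClass1 subobject sits at offset 0 of TestClass3, so no pointer adjustment is needed.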


I’m not very knowledgeable about Clang, and I greatly appreciate your help.

Thanks,
Ben




