[llvm-commits] [dragonegg] r127408 - in /dragonegg/trunk: ./ darwin/ freebsd/ linux/ unknown/ utils/ x86/
Duncan Sands
baldrick at free.fr
Thu Mar 10 08:05:54 PST 2011
Author: baldrick
Date: Thu Mar 10 10:05:54 2011
New Revision: 127408
URL: http://llvm.org/viewvc/llvm-project?rev=127408&view=rev
Log:
Now that llvm-gcc is dead (if not buried), there is no point in using the
same file names as it does. Change to a naming scheme that I like better.
Added:
dragonegg/trunk/ABI.h
- copied, changed from r127403, dragonegg/trunk/llvm-abi.h
dragonegg/trunk/Backend.cpp
- copied, changed from r127406, dragonegg/trunk/llvm-backend.cpp
dragonegg/trunk/Constants.cpp
- copied, changed from r127406, dragonegg/trunk/llvm-constant.cpp
dragonegg/trunk/Constants.h
- copied, changed from r127406, dragonegg/trunk/llvm-constant.h
dragonegg/trunk/Convert.cpp
- copied, changed from r127406, dragonegg/trunk/llvm-convert.cpp
dragonegg/trunk/Debug.cpp
- copied, changed from r127403, dragonegg/trunk/llvm-debug.cpp
dragonegg/trunk/Debug.h
- copied, changed from r127403, dragonegg/trunk/llvm-debug.h
dragonegg/trunk/DefaultABI.cpp
- copied, changed from r127403, dragonegg/trunk/llvm-abi-default.cpp
dragonegg/trunk/Internals.h
- copied, changed from r127406, dragonegg/trunk/llvm-internal.h
dragonegg/trunk/Trees.cpp
- copied, changed from r127406, dragonegg/trunk/llvm-tree.cpp
dragonegg/trunk/Trees.h
- copied, changed from r127406, dragonegg/trunk/llvm-tree.h
dragonegg/trunk/Types.cpp
- copied, changed from r127403, dragonegg/trunk/llvm-types.cpp
dragonegg/trunk/cache.c
- copied, changed from r127224, dragonegg/trunk/llvm-cache.c
dragonegg/trunk/cache.h
- copied, changed from r127403, dragonegg/trunk/llvm-cache.h
dragonegg/trunk/darwin/OS.h
- copied, changed from r127403, dragonegg/trunk/darwin/llvm-os.h
dragonegg/trunk/freebsd/OS.h
- copied, changed from r127403, dragonegg/trunk/freebsd/llvm-os.h
dragonegg/trunk/gt-cache.h
- copied unchanged from r127224, dragonegg/trunk/gt-llvm-cache.h
dragonegg/trunk/linux/OS.h
- copied, changed from r127403, dragonegg/trunk/linux/llvm-os.h
dragonegg/trunk/unknown/OS.h
- copied unchanged from r127224, dragonegg/trunk/unknown/llvm-os.h
dragonegg/trunk/unknown/Target.cpp
- copied unchanged from r127224, dragonegg/trunk/unknown/llvm-target.cpp
dragonegg/trunk/utils/TargetInfo.cpp
- copied, changed from r127403, dragonegg/trunk/utils/target.cpp
dragonegg/trunk/x86/Target.cpp
- copied, changed from r127403, dragonegg/trunk/x86/llvm-target.cpp
dragonegg/trunk/x86/Target.h
- copied, changed from r127403, dragonegg/trunk/x86/llvm-target.h
Removed:
dragonegg/trunk/darwin/llvm-os.h
dragonegg/trunk/freebsd/llvm-os.h
dragonegg/trunk/gt-llvm-cache.h
dragonegg/trunk/linux/llvm-os.h
dragonegg/trunk/llvm-abi-default.cpp
dragonegg/trunk/llvm-abi.h
dragonegg/trunk/llvm-backend.cpp
dragonegg/trunk/llvm-cache.c
dragonegg/trunk/llvm-cache.h
dragonegg/trunk/llvm-constant.cpp
dragonegg/trunk/llvm-constant.h
dragonegg/trunk/llvm-convert.cpp
dragonegg/trunk/llvm-debug.cpp
dragonegg/trunk/llvm-debug.h
dragonegg/trunk/llvm-internal.h
dragonegg/trunk/llvm-tree.cpp
dragonegg/trunk/llvm-tree.h
dragonegg/trunk/llvm-types.cpp
dragonegg/trunk/unknown/llvm-os.h
dragonegg/trunk/unknown/llvm-target.cpp
dragonegg/trunk/utils/target.cpp
dragonegg/trunk/x86/llvm-target.cpp
dragonegg/trunk/x86/llvm-target.h
Modified:
dragonegg/trunk/Makefile
Copied: dragonegg/trunk/ABI.h (from r127403, dragonegg/trunk/llvm-abi.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/ABI.h?p2=dragonegg/trunk/ABI.h&p1=dragonegg/trunk/llvm-abi.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-abi.h (original)
+++ dragonegg/trunk/ABI.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===------ llvm-abi.h - Processor ABI customization hooks ------*- C++ -*-===//
+//===--------- ABI.h - Processor ABI customization hooks --------*- C++ -*-===//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
// Duncan Sands et al.
@@ -23,12 +23,12 @@
// structures are passed by-value.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_ABI_H
-#define LLVM_ABI_H
+#ifndef DRAGONEGG_ABI_H
+#define DRAGONEGG_ABI_H
// Plugin headers
-#include "llvm-internal.h"
-#include "llvm-target.h"
+#include "Internals.h"
+#include "Target.h"
// LLVM headers
#include "llvm/LLVMContext.h"
@@ -348,4 +348,4 @@
std::vector<const Type*> &ScalarElts);
};
-#endif /* LLVM_ABI_H */
+#endif /* DRAGONEGG_ABI_H */
Copied: dragonegg/trunk/Backend.cpp (from r127406, dragonegg/trunk/llvm-backend.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Backend.cpp?p2=dragonegg/trunk/Backend.cpp&p1=dragonegg/trunk/llvm-backend.cpp&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-backend.cpp (original)
+++ dragonegg/trunk/Backend.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===-------- llvm-backend.cpp - High-level LLVM backend interface --------===//
+//===----------- Backend.cpp - High-level LLVM backend interface ----------===//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
// Duncan Sands et al.
@@ -23,12 +23,12 @@
// Plugin headers
extern "C" {
-#include "llvm-cache.h"
+#include "cache.h"
}
-#include "llvm-constant.h"
-#include "llvm-debug.h"
-#include "llvm-os.h"
-#include "llvm-target.h"
+#include "Constants.h"
+#include "Debug.h"
+#include "OS.h"
+#include "Target.h"
// LLVM headers
#define DEBUG_TYPE "plugin"
Copied: dragonegg/trunk/Constants.cpp (from r127406, dragonegg/trunk/llvm-constant.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Constants.cpp?p2=dragonegg/trunk/Constants.cpp&p1=dragonegg/trunk/llvm-constant.cpp&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-constant.cpp (original)
+++ dragonegg/trunk/Constants.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===----- llvm-constant.cpp - Converting and working with constants ------===//
+//===------- Constants.cpp - Converting and working with constants --------===//
//
// Copyright (C) 2011 Duncan Sands
//
@@ -21,9 +21,9 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-constant.h"
-#include "llvm-internal.h"
-#include "llvm-tree.h"
+#include "Constants.h"
+#include "Internals.h"
+#include "Trees.h"
// LLVM headers
#include "llvm/GlobalVariable.h"
Copied: dragonegg/trunk/Constants.h (from r127406, dragonegg/trunk/llvm-constant.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Constants.h?p2=dragonegg/trunk/Constants.h&p1=dragonegg/trunk/llvm-constant.h&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-constant.h (original)
+++ dragonegg/trunk/Constants.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//=--- llvm-constant.h - Converting and working with constants --*- C++ -*---=//
+//=----- Constants.h - Converting and working with constants --*- C++ -*-----=//
//
// Copyright (C) 2011 Duncan Sands.
//
@@ -21,8 +21,8 @@
// with them.
//===----------------------------------------------------------------------===//
-#ifndef DRAGONEGG_CONSTANT_H
-#define DRAGONEGG_CONSTANT_H
+#ifndef DRAGONEGG_CONSTANTS_H
+#define DRAGONEGG_CONSTANTS_H
union tree_node;
@@ -36,4 +36,4 @@
// Constant Expression l-values.
extern llvm::Constant *EmitAddressOf(tree_node *exp);
-#endif /* DRAGONEGG_CONSTANT_H */
+#endif /* DRAGONEGG_CONSTANTS_H */
Copied: dragonegg/trunk/Convert.cpp (from r127406, dragonegg/trunk/llvm-convert.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Convert.cpp?p2=dragonegg/trunk/Convert.cpp&p1=dragonegg/trunk/llvm-convert.cpp&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-convert.cpp (original)
+++ dragonegg/trunk/Convert.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===---------- llvm-convert.cpp - Converting gimple to LLVM IR -----------===//
+//===------------- Convert.cpp - Converting gimple to LLVM IR -------------===//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
// Duncan Sands et al.
@@ -22,10 +22,10 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-abi.h"
-#include "llvm-constant.h"
-#include "llvm-debug.h"
-#include "llvm-tree.h"
+#include "ABI.h"
+#include "Constants.h"
+#include "Debug.h"
+#include "Trees.h"
// LLVM headers
#include "llvm/Module.h"
Copied: dragonegg/trunk/Debug.cpp (from r127403, dragonegg/trunk/llvm-debug.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Debug.cpp?p2=dragonegg/trunk/Debug.cpp&p1=dragonegg/trunk/llvm-debug.cpp&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-debug.cpp (original)
+++ dragonegg/trunk/Debug.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===------------ llvm-debug.cpp - Debug information gathering ------------===//
+//===-------------- Debug.cpp - Debug information gathering ---------------===//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Jim Laskey,
// Duncan Sands et al.
@@ -22,7 +22,7 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-debug.h"
+#include "Debug.h"
// LLVM headers
#include "llvm/Module.h"
Copied: dragonegg/trunk/Debug.h (from r127403, dragonegg/trunk/llvm-debug.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Debug.h?p2=dragonegg/trunk/Debug.h&p1=dragonegg/trunk/llvm-debug.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-debug.h (original)
+++ dragonegg/trunk/Debug.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===---- llvm-debug.h - Interface for generating debug info ----*- C++ -*-===//
+//===------ Debug.h - Interface for generating debug info ----*- C++ -*----===//
//
// Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 Jim Laskey, Duncan Sands
// et al.
@@ -21,11 +21,11 @@
// This file declares the debug interfaces shared among the dragonegg files.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_DEBUG_H
-#define LLVM_DEBUG_H
+#ifndef DRAGONEGG_DEBUG_H
+#define DRAGONEGG_DEBUG_H
// Plugin headers
-#include "llvm-internal.h"
+#include "Internals.h"
// LLVM headers
#include "llvm/Analysis/DebugInfo.h"
@@ -364,4 +364,4 @@
} // end namespace llvm
-#endif /* LLVM_DEBUG_H */
+#endif /* DRAGONEGG_DEBUG_H */
Copied: dragonegg/trunk/DefaultABI.cpp (from r127403, dragonegg/trunk/llvm-abi-default.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/DefaultABI.cpp?p2=dragonegg/trunk/DefaultABI.cpp&p1=dragonegg/trunk/llvm-abi-default.cpp&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-abi-default.cpp (original)
+++ dragonegg/trunk/DefaultABI.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===--------- llvm-abi-default.cpp - Default ABI implementation ----------===//
+//===------------ DefaultABI.cpp - Default ABI implementation -------------===//
//
// Copyright (C) 2010, 2011 Rafael Espindola, Duncan Sands et al.
//
@@ -21,7 +21,7 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-abi.h"
+#include "ABI.h"
// System headers
#include <gmp.h>
Copied: dragonegg/trunk/Internals.h (from r127406, dragonegg/trunk/llvm-internal.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Internals.h?p2=dragonegg/trunk/Internals.h&p1=dragonegg/trunk/llvm-internal.h&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-internal.h (original)
+++ dragonegg/trunk/Internals.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//=-- llvm-internal.h - Interface between the backend components --*- C++ -*-=//
+//=---- Internals.h - Interface between the backend components --*- C++ -*---=//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
// Duncan Sands et al.
@@ -21,8 +21,8 @@
// This file declares the internal interfaces shared among the dragonegg files.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_INTERNAL_H
-#define LLVM_INTERNAL_H
+#ifndef DRAGONEGG_INTERNALS_H
+#define DRAGONEGG_INTERNALS_H
// LLVM headers
#include "llvm/Intrinsics.h"
@@ -857,4 +857,4 @@
Constant *EmitLV_LABEL_DECL(tree_node *exp);
};
-#endif /* LLVM_INTERNAL_H */
+#endif /* DRAGONEGG_INTERNALS_H */
Modified: dragonegg/trunk/Makefile
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Makefile?rev=127408&r1=127407&r2=127408&view=diff
==============================================================================
--- dragonegg/trunk/Makefile (original)
+++ dragonegg/trunk/Makefile Thu Mar 10 10:05:54 2011
@@ -37,15 +37,14 @@
REVISION:=$(shell svnversion -n $(SRC_DIR))
PLUGIN=dragonegg.so
-PLUGIN_OBJECTS=llvm-abi-default.o llvm-backend.o llvm-cache.o llvm-constant.o \
- llvm-convert.o llvm-debug.o llvm-tree.o llvm-types.o \
- bits_and_bobs.o
+PLUGIN_OBJECTS=cache.o Backend.o Constants.o Convert.o Debug.o DefaultABI.o \
+ Trees.o Types.o bits_and_bobs.o
-TARGET_OBJECT=llvm-target.o
-TARGET_SOURCE=$(SRC_DIR)/$(shell $(TARGET_UTIL) -p)/llvm-target.cpp
+TARGET_OBJECT=Target.o
+TARGET_SOURCE=$(SRC_DIR)/$(shell $(TARGET_UTIL) -p)/Target.cpp
-TARGET_UTIL_OBJECTS=target.o
-TARGET_UTIL=./target
+TARGET_UTIL_OBJECTS=TargetInfo.o
+TARGET_UTIL=./TargetInfo
ALL_OBJECTS=$(PLUGIN_OBJECTS) $(TARGET_OBJECT) $(TARGET_UTIL_OBJECTS)
@@ -84,7 +83,7 @@
$(QUIET)$(CXX) -c $(CPP_OPTIONS) $(TARGET_HEADERS) $(CXXFLAGS) $<
$(TARGET_OBJECT): $(TARGET_UTIL)
- @echo Compiling $(shell $(TARGET_UTIL) -p)/llvm-target.cpp
+ @echo Compiling $(shell $(TARGET_UTIL) -p)/Target.cpp
$(QUIET)$(CXX) -o $@ -c $(CPP_OPTIONS) $(TARGET_HEADERS) $(CXXFLAGS) \
$(TARGET_SOURCE)
@@ -103,11 +102,11 @@
# The following target exists for the benefit of the dragonegg maintainers, and
# is not used in a normal build.
-GENGTYPE_INPUT=$(SRC_DIR)/llvm-cache.c
-GENGTYPE_OUTPUT=$(SRC_DIR)/gt-llvm-cache.h
-gt-llvm-cache.h::
+GENGTYPE_INPUT=$(SRC_DIR)/cache.c
+GENGTYPE_OUTPUT=$(SRC_DIR)/gt-cache.h
+gt-cache.h::
cd $(HOME)/GCC/objects/gcc && ./build/gengtype \
-P $(GENGTYPE_OUTPUT) $(GCC_PLUGIN_DIR) gtyp-input.list \
$(GENGTYPE_INPUT)
- sed -i "s/ggc_cache_tab .*\[\]/ggc_cache_tab gt_ggc_rc__gt_llvm_cache_h[]/" $(GENGTYPE_OUTPUT)
- sed -i "s/ggc_root_tab .*\[\]/ggc_root_tab gt_pch_rc__gt_llvm_cache_h[]/" $(GENGTYPE_OUTPUT)
+ sed -i "s/ggc_cache_tab .*\[\]/ggc_cache_tab gt_ggc_rc__gt_cache_h[]/" $(GENGTYPE_OUTPUT)
+ sed -i "s/ggc_root_tab .*\[\]/ggc_root_tab gt_pch_rc__gt_cache_h[]/" $(GENGTYPE_OUTPUT)
Copied: dragonegg/trunk/Trees.cpp (from r127406, dragonegg/trunk/llvm-tree.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Trees.cpp?p2=dragonegg/trunk/Trees.cpp&p1=dragonegg/trunk/llvm-tree.cpp&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-tree.cpp (original)
+++ dragonegg/trunk/Trees.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===---- llvm-tree.cpp - Utility functions for working with GCC trees ----===//
+//===------ Trees.cpp - Utility functions for working with GCC trees ------===//
//
// Copyright (C) 2010, 2011 Duncan Sands.
//
@@ -21,7 +21,7 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-tree.h"
+#include "Trees.h"
// LLVM headers
#include "llvm/ADT/Twine.h"
Copied: dragonegg/trunk/Trees.h (from r127406, dragonegg/trunk/llvm-tree.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Trees.h?p2=dragonegg/trunk/Trees.h&p1=dragonegg/trunk/llvm-tree.h&r1=127406&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-tree.h (original)
+++ dragonegg/trunk/Trees.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//=-- llvm-tree.h - Utility functions for working with GCC trees --*- C++ -*-=//
+//=---- Trees.h - Utility functions for working with GCC trees --*- C++ -*---=//
//
// Copyright (C) 2010, 2011 Duncan Sands.
//
@@ -20,8 +20,8 @@
// This file declares utility functions for working with GCC trees.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_TREE_H
-#define LLVM_TREE_H
+#ifndef DRAGONEGG_TREES_H
+#define DRAGONEGG_TREES_H
// System headers
#include <string>
@@ -41,4 +41,4 @@
/// in undefined behaviour.
bool hasNSW(tree_node *type);
-#endif /* LLVM_TREE_H */
+#endif /* DRAGONEGG_TREES_H */
Copied: dragonegg/trunk/Types.cpp (from r127403, dragonegg/trunk/llvm-types.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/Types.cpp?p2=dragonegg/trunk/Types.cpp&p1=dragonegg/trunk/llvm-types.cpp&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-types.cpp (original)
+++ dragonegg/trunk/Types.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===--------- llvm-type.cpp - Converting GCC types to LLVM types ---------===//
+//===----------- Types.cpp - Converting GCC types to LLVM types -----------===//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
// Duncan Sands et al.
@@ -22,10 +22,10 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-abi.h"
-#include "llvm-tree.h"
+#include "ABI.h"
+#include "Trees.h"
extern "C" {
-#include "llvm-cache.h"
+#include "cache.h"
}
// LLVM headers
Copied: dragonegg/trunk/cache.c (from r127224, dragonegg/trunk/llvm-cache.c)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/cache.c?p2=dragonegg/trunk/cache.c&p1=dragonegg/trunk/llvm-cache.c&r1=127224&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-cache.c (original)
+++ dragonegg/trunk/cache.c Thu Mar 10 10:05:54 2011
@@ -25,7 +25,7 @@
===----------------------------------------------------------------------===*/
/* Plugin headers. */
-#include "llvm-cache.h"
+#include "cache.h"
/* GCC headers. */
#include "config.h"
@@ -49,7 +49,7 @@
htab_t llvm_cache;
/* Garbage collector header. */
-#include "gt-llvm-cache.h"
+#include "gt-cache.h"
/* llvm_has_cached - Returns whether a value has been associated with the
tree. */
Copied: dragonegg/trunk/cache.h (from r127403, dragonegg/trunk/llvm-cache.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/cache.h?p2=dragonegg/trunk/cache.h&p1=dragonegg/trunk/llvm-cache.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/llvm-cache.h (original)
+++ dragonegg/trunk/cache.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-/*===-------- llvm-cache.h - Caching values "in" GCC trees --------*- C -*-===*\
+/*===----------- cache.h - Caching values "in" GCC trees ----------*- C -*-===*\
|* *|
|* Copyright (C) 2009, 2010, 2011 Duncan Sands. *|
|* *|
@@ -23,8 +23,8 @@
|* the cached value will have been cleared. *|
\*===----------------------------------------------------------------------===*/
-#ifndef LLVM_CACHE_H
-#define LLVM_CACHE_H
+#ifndef DRAGONEGG_CACHE_H
+#define DRAGONEGG_CACHE_H
union tree_node;
@@ -42,4 +42,4 @@
/* llvm_replace_cached - Replaces all occurrences of old_val with new_val. */
extern void llvm_replace_cached(const void *old_val, const void *new_val);
-#endif /* LLVM_CACHE_H */
+#endif /* DRAGONEGG_CACHE_H */
Copied: dragonegg/trunk/darwin/OS.h (from r127403, dragonegg/trunk/darwin/llvm-os.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/darwin/OS.h?p2=dragonegg/trunk/darwin/OS.h&p1=dragonegg/trunk/darwin/llvm-os.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/darwin/llvm-os.h (original)
+++ dragonegg/trunk/darwin/OS.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===--------- llvm-os.h - Darwin specific definitions ----------*- C++ -*-===//
+//===------------ OS.h - Darwin specific definitions ------------*- C++ -*-===//
//
// Copyright (C) 2009, 2010, 2011 Duncan Sands et al.
//
@@ -20,8 +20,8 @@
// This file provides Darwin specific declarations.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_OS_H
-#define LLVM_OS_H
+#ifndef DRAGONEGG_OS_H
+#define DRAGONEGG_OS_H
/* Darwin X86-64 only supports PIC code generation. */
#if defined (TARGET_386)
@@ -53,4 +53,4 @@
} \
} while (0)
-#endif /* LLVM_OS_H */
+#endif /* DRAGONEGG_OS_H */
Removed: dragonegg/trunk/darwin/llvm-os.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/darwin/llvm-os.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/darwin/llvm-os.h (original)
+++ dragonegg/trunk/darwin/llvm-os.h (removed)
@@ -1,56 +0,0 @@
-//===--------- llvm-os.h - Darwin specific definitions ----------*- C++ -*-===//
-//
-// Copyright (C) 2009, 2010, 2011 Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file provides Darwin specific declarations.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_OS_H
-#define LLVM_OS_H
-
-/* Darwin X86-64 only supports PIC code generation. */
-#if defined (TARGET_386)
-#define LLVM_SET_TARGET_OPTIONS(argvec) \
- if ((TARGET_64BIT) || flag_pic) \
- argvec.push_back ("--relocation-model=pic"); \
- else if (!MACHO_DYNAMIC_NO_PIC_P) \
- argvec.push_back ("--relocation-model=static")
-#elif defined (TARGET_ARM)
-#define LLVM_SET_TARGET_OPTIONS(argvec) \
- if (flag_pic) \
- argvec.push_back ("--relocation-model=pic"); \
- else if (!MACHO_DYNAMIC_NO_PIC_P) \
- argvec.push_back ("--relocation-model=static"); \
-#else /* !TARGET_386 && !TARGET_ARM */
-#define LLVM_SET_TARGET_OPTIONS(argvec) \
- if (flag_pic) \
- argvec.push_back ("--relocation-model=pic"); \
- else if (!MACHO_DYNAMIC_NO_PIC_P) \
- argvec.push_back ("--relocation-model=static")
-#endif /* !TARGET_386 && !TARGET_ARM */
-
-/* Give a constant string a sufficient alignment for the platform. */
-/* radar 7291825 */
-#define TARGET_ADJUST_CSTRING_ALIGN(GV) \
- do { \
- if (GV->hasInternalLinkage()) { \
- GV->setAlignment(TARGET_64BIT ? 8 : 4); \
- } \
- } while (0)
-
-#endif /* LLVM_OS_H */
Copied: dragonegg/trunk/freebsd/OS.h (from r127403, dragonegg/trunk/freebsd/llvm-os.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/freebsd/OS.h?p2=dragonegg/trunk/freebsd/OS.h&p1=dragonegg/trunk/freebsd/llvm-os.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/freebsd/llvm-os.h (original)
+++ dragonegg/trunk/freebsd/OS.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===--------- llvm-os.h - FreeBSD specific definitions ---------*- C++ -*-===//
+//===------------ OS.h - FreeBSD specific definitions -----------*- C++ -*-===//
//
// Copyright (C) 2009, 2010, 2011 Duncan Sands et al.
//
@@ -20,7 +20,7 @@
// This file provides FreeBSD specific declarations.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_OS_H
-#define LLVM_OS_H
+#ifndef DRAGONEGG_OS_H
+#define DRAGONEGG_OS_H
-#endif /* LLVM_OS_H */
+#endif /* DRAGONEGG_OS_H */
Removed: dragonegg/trunk/freebsd/llvm-os.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/freebsd/llvm-os.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/freebsd/llvm-os.h (original)
+++ dragonegg/trunk/freebsd/llvm-os.h (removed)
@@ -1,26 +0,0 @@
-//===--------- llvm-os.h - FreeBSD specific definitions ---------*- C++ -*-===//
-//
-// Copyright (C) 2009, 2010, 2011 Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file provides FreeBSD specific declarations.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_OS_H
-#define LLVM_OS_H
-
-#endif /* LLVM_OS_H */
Removed: dragonegg/trunk/gt-llvm-cache.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/gt-llvm-cache.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/gt-llvm-cache.h (original)
+++ dragonegg/trunk/gt-llvm-cache.h (removed)
@@ -1,802 +0,0 @@
-/* Type information for GCC.
- Copyright (C) 2004, 2007, 2009 Free Software Foundation, Inc.
-
-This file is part of GCC.
-
-GCC is free software; you can redistribute it and/or modify it under
-the terms of the GNU General Public License as published by the Free
-Software Foundation; either version 3, or (at your option) any later
-version.
-
-GCC is distributed in the hope that it will be useful, but WITHOUT ANY
-WARRANTY; without even the implied warranty of MERCHANTABILITY or
-FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-for more details.
-
-You should have received a copy of the GNU General Public License
-along with GCC; see the file COPYING3. If not see
-<http://www.gnu.org/licenses/>. */
-
-/* This file is machine generated. Do not edit. */
-
-/* GC marker procedures. */
-/* macros and declarations */
-#define gt_ggc_m_13tree_llvm_map(X) do { \
- if (X != NULL) gt_ggc_mx_tree_llvm_map (X);\
- } while (0)
-extern void gt_ggc_mx_tree_llvm_map (void *);
-#define gt_ggc_m_15interface_tuple(X) do { \
- if (X != NULL) gt_ggc_mx_interface_tuple (X);\
- } while (0)
-extern void gt_ggc_mx_interface_tuple (void *);
-#define gt_ggc_m_16volatilized_type(X) do { \
- if (X != NULL) gt_ggc_mx_volatilized_type (X);\
- } while (0)
-extern void gt_ggc_mx_volatilized_type (void *);
-#define gt_ggc_m_17string_descriptor(X) do { \
- if (X != NULL) gt_ggc_mx_string_descriptor (X);\
- } while (0)
-extern void gt_ggc_mx_string_descriptor (void *);
-#define gt_ggc_m_15c_inline_static(X) do { \
- if (X != NULL) gt_ggc_mx_c_inline_static (X);\
- } while (0)
-extern void gt_ggc_mx_c_inline_static (void *);
-#define gt_ggc_m_24VEC_c_goto_bindings_p_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_c_goto_bindings_p_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_c_goto_bindings_p_gc (void *);
-#define gt_ggc_m_15c_goto_bindings(X) do { \
- if (X != NULL) gt_ggc_mx_c_goto_bindings (X);\
- } while (0)
-extern void gt_ggc_mx_c_goto_bindings (void *);
-#define gt_ggc_m_7c_scope(X) do { \
- if (X != NULL) gt_ggc_mx_c_scope (X);\
- } while (0)
-extern void gt_ggc_mx_c_scope (void *);
-#define gt_ggc_m_9c_binding(X) do { \
- if (X != NULL) gt_ggc_mx_c_binding (X);\
- } while (0)
-extern void gt_ggc_mx_c_binding (void *);
-#define gt_ggc_m_12c_label_vars(X) do { \
- if (X != NULL) gt_ggc_mx_c_label_vars (X);\
- } while (0)
-extern void gt_ggc_mx_c_label_vars (void *);
-#define gt_ggc_m_8c_parser(X) do { \
- if (X != NULL) gt_ggc_mx_c_parser (X);\
- } while (0)
-extern void gt_ggc_mx_c_parser (void *);
-#define gt_ggc_m_9imp_entry(X) do { \
- if (X != NULL) gt_ggc_mx_imp_entry (X);\
- } while (0)
-extern void gt_ggc_mx_imp_entry (void *);
-#define gt_ggc_m_16hashed_attribute(X) do { \
- if (X != NULL) gt_ggc_mx_hashed_attribute (X);\
- } while (0)
-extern void gt_ggc_mx_hashed_attribute (void *);
-#define gt_ggc_m_12hashed_entry(X) do { \
- if (X != NULL) gt_ggc_mx_hashed_entry (X);\
- } while (0)
-extern void gt_ggc_mx_hashed_entry (void *);
-#define gt_ggc_m_14type_assertion(X) do { \
- if (X != NULL) gt_ggc_mx_type_assertion (X);\
- } while (0)
-extern void gt_ggc_mx_type_assertion (void *);
-#define gt_ggc_m_18treetreehash_entry(X) do { \
- if (X != NULL) gt_ggc_mx_treetreehash_entry (X);\
- } while (0)
-extern void gt_ggc_mx_treetreehash_entry (void *);
-#define gt_ggc_m_5CPool(X) do { \
- if (X != NULL) gt_ggc_mx_CPool (X);\
- } while (0)
-extern void gt_ggc_mx_CPool (void *);
-#define gt_ggc_m_3JCF(X) do { \
- if (X != NULL) gt_ggc_mx_JCF (X);\
- } while (0)
-extern void gt_ggc_mx_JCF (void *);
-#define gt_ggc_m_17module_htab_entry(X) do { \
- if (X != NULL) gt_ggc_mx_module_htab_entry (X);\
- } while (0)
-extern void gt_ggc_mx_module_htab_entry (void *);
-#define gt_ggc_m_13binding_level(X) do { \
- if (X != NULL) gt_ggc_mx_binding_level (X);\
- } while (0)
-extern void gt_ggc_mx_binding_level (void *);
-#define gt_ggc_m_9opt_stack(X) do { \
- if (X != NULL) gt_ggc_mx_opt_stack (X);\
- } while (0)
-extern void gt_ggc_mx_opt_stack (void *);
-#define gt_ggc_m_16def_pragma_macro(X) do { \
- if (X != NULL) gt_ggc_mx_def_pragma_macro (X);\
- } while (0)
-extern void gt_ggc_mx_def_pragma_macro (void *);
-#define gt_ggc_m_22def_pragma_macro_value(X) do { \
- if (X != NULL) gt_ggc_mx_def_pragma_macro_value (X);\
- } while (0)
-extern void gt_ggc_mx_def_pragma_macro_value (void *);
-#define gt_ggc_m_11align_stack(X) do { \
- if (X != NULL) gt_ggc_mx_align_stack (X);\
- } while (0)
-extern void gt_ggc_mx_align_stack (void *);
-#define gt_ggc_m_18VEC_tree_gc_vec_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_tree_gc_vec_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_tree_gc_vec_gc (void *);
-#define gt_ggc_m_19VEC_const_char_p_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_const_char_p_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_const_char_p_gc (void *);
-#define gt_ggc_m_21pending_abstract_type(X) do { \
- if (X != NULL) gt_ggc_mx_pending_abstract_type (X);\
- } while (0)
-extern void gt_ggc_mx_pending_abstract_type (void *);
-#define gt_ggc_m_15VEC_tree_int_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_tree_int_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_tree_int_gc (void *);
-#define gt_ggc_m_9cp_parser(X) do { \
- if (X != NULL) gt_ggc_mx_cp_parser (X);\
- } while (0)
-extern void gt_ggc_mx_cp_parser (void *);
-#define gt_ggc_m_17cp_parser_context(X) do { \
- if (X != NULL) gt_ggc_mx_cp_parser_context (X);\
- } while (0)
-extern void gt_ggc_mx_cp_parser_context (void *);
-#define gt_ggc_m_8cp_lexer(X) do { \
- if (X != NULL) gt_ggc_mx_cp_lexer (X);\
- } while (0)
-extern void gt_ggc_mx_cp_lexer (void *);
-#define gt_ggc_m_10tree_check(X) do { \
- if (X != NULL) gt_ggc_mx_tree_check (X);\
- } while (0)
-extern void gt_ggc_mx_tree_check (void *);
-#define gt_ggc_m_22VEC_deferred_access_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_deferred_access_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_deferred_access_gc (void *);
-#define gt_ggc_m_10spec_entry(X) do { \
- if (X != NULL) gt_ggc_mx_spec_entry (X);\
- } while (0)
-extern void gt_ggc_mx_spec_entry (void *);
-#define gt_ggc_m_16pending_template(X) do { \
- if (X != NULL) gt_ggc_mx_pending_template (X);\
- } while (0)
-extern void gt_ggc_mx_pending_template (void *);
-#define gt_ggc_m_21named_label_use_entry(X) do { \
- if (X != NULL) gt_ggc_mx_named_label_use_entry (X);\
- } while (0)
-extern void gt_ggc_mx_named_label_use_entry (void *);
-#define gt_ggc_m_28VEC_deferred_access_check_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_deferred_access_check_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_deferred_access_check_gc (void *);
-#define gt_ggc_m_11tinst_level(X) do { \
- if (X != NULL) gt_ggc_mx_tinst_level (X);\
- } while (0)
-extern void gt_ggc_mx_tinst_level (void *);
-#define gt_ggc_m_18sorted_fields_type(X) do { \
- if (X != NULL) gt_ggc_mx_sorted_fields_type (X);\
- } while (0)
-extern void gt_ggc_mx_sorted_fields_type (void *);
-#define gt_ggc_m_18VEC_tree_pair_s_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_tree_pair_s_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_tree_pair_s_gc (void *);
-#define gt_ggc_m_17named_label_entry(X) do { \
- if (X != NULL) gt_ggc_mx_named_label_entry (X);\
- } while (0)
-extern void gt_ggc_mx_named_label_entry (void *);
-#define gt_ggc_m_14cp_token_cache(X) do { \
- if (X != NULL) gt_ggc_mx_cp_token_cache (X);\
- } while (0)
-extern void gt_ggc_mx_cp_token_cache (void *);
-#define gt_ggc_m_11saved_scope(X) do { \
- if (X != NULL) gt_ggc_mx_saved_scope (X);\
- } while (0)
-extern void gt_ggc_mx_saved_scope (void *);
-#define gt_ggc_m_16cxx_int_tree_map(X) do { \
- if (X != NULL) gt_ggc_mx_cxx_int_tree_map (X);\
- } while (0)
-extern void gt_ggc_mx_cxx_int_tree_map (void *);
-#define gt_ggc_m_23VEC_cp_class_binding_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_cp_class_binding_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_cp_class_binding_gc (void *);
-#define gt_ggc_m_24VEC_cxx_saved_binding_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_cxx_saved_binding_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_cxx_saved_binding_gc (void *);
-#define gt_ggc_m_16cp_binding_level(X) do { \
- if (X != NULL) gt_ggc_mx_cp_binding_level (X);\
- } while (0)
-extern void gt_ggc_mx_cp_binding_level (void *);
-#define gt_ggc_m_11cxx_binding(X) do { \
- if (X != NULL) gt_ggc_mx_cxx_binding (X);\
- } while (0)
-extern void gt_ggc_mx_cxx_binding (void *);
-#define gt_ggc_m_15binding_entry_s(X) do { \
- if (X != NULL) gt_ggc_mx_binding_entry_s (X);\
- } while (0)
-extern void gt_ggc_mx_binding_entry_s (void *);
-#define gt_ggc_m_15binding_table_s(X) do { \
- if (X != NULL) gt_ggc_mx_binding_table_s (X);\
- } while (0)
-extern void gt_ggc_mx_binding_table_s (void *);
-#define gt_ggc_m_14VEC_tinfo_s_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_tinfo_s_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_tinfo_s_gc (void *);
-#define gt_ggc_m_18gnat_binding_level(X) do { \
- if (X != NULL) gt_ggc_mx_gnat_binding_level (X);\
- } while (0)
-extern void gt_ggc_mx_gnat_binding_level (void *);
-#define gt_ggc_m_9elab_info(X) do { \
- if (X != NULL) gt_ggc_mx_elab_info (X);\
- } while (0)
-extern void gt_ggc_mx_elab_info (void *);
-#define gt_ggc_m_10stmt_group(X) do { \
- if (X != NULL) gt_ggc_mx_stmt_group (X);\
- } while (0)
-extern void gt_ggc_mx_stmt_group (void *);
-#define gt_ggc_m_16VEC_parm_attr_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_parm_attr_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_parm_attr_gc (void *);
-#define gt_ggc_m_11parm_attr_d(X) do { \
- if (X != NULL) gt_ggc_mx_parm_attr_d (X);\
- } while (0)
-extern void gt_ggc_mx_parm_attr_d (void *);
-#define gt_ggc_m_22VEC_ipa_edge_args_t_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_ipa_edge_args_t_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_ipa_edge_args_t_gc (void *);
-#define gt_ggc_m_20lto_symtab_entry_def(X) do { \
- if (X != NULL) gt_ggc_mx_lto_symtab_entry_def (X);\
- } while (0)
-extern void gt_ggc_mx_lto_symtab_entry_def (void *);
-#define gt_ggc_m_20ssa_operand_memory_d(X) do { \
- if (X != NULL) gt_ggc_mx_ssa_operand_memory_d (X);\
- } while (0)
-extern void gt_ggc_mx_ssa_operand_memory_d (void *);
-#define gt_ggc_m_13scev_info_str(X) do { \
- if (X != NULL) gt_ggc_mx_scev_info_str (X);\
- } while (0)
-extern void gt_ggc_mx_scev_info_str (void *);
-#define gt_ggc_m_13VEC_gimple_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_gimple_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_gimple_gc (void *);
-#define gt_ggc_m_9type_hash(X) do { \
- if (X != NULL) gt_ggc_mx_type_hash (X);\
- } while (0)
-extern void gt_ggc_mx_type_hash (void *);
-#define gt_ggc_m_16string_pool_data(X) do { \
- if (X != NULL) gt_ggc_mx_string_pool_data (X);\
- } while (0)
-extern void gt_ggc_mx_string_pool_data (void *);
-#define gt_ggc_m_13libfunc_entry(X) do { \
- if (X != NULL) gt_ggc_mx_libfunc_entry (X);\
- } while (0)
-extern void gt_ggc_mx_libfunc_entry (void *);
-#define gt_ggc_m_23temp_slot_address_entry(X) do { \
- if (X != NULL) gt_ggc_mx_temp_slot_address_entry (X);\
- } while (0)
-extern void gt_ggc_mx_temp_slot_address_entry (void *);
-#define gt_ggc_m_15throw_stmt_node(X) do { \
- if (X != NULL) gt_ggc_mx_throw_stmt_node (X);\
- } while (0)
-extern void gt_ggc_mx_throw_stmt_node (void *);
-#define gt_ggc_m_21VEC_eh_landing_pad_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_eh_landing_pad_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_eh_landing_pad_gc (void *);
-#define gt_ggc_m_16VEC_eh_region_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_eh_region_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_eh_region_gc (void *);
-#define gt_ggc_m_10eh_catch_d(X) do { \
- if (X != NULL) gt_ggc_mx_eh_catch_d (X);\
- } while (0)
-extern void gt_ggc_mx_eh_catch_d (void *);
-#define gt_ggc_m_16eh_landing_pad_d(X) do { \
- if (X != NULL) gt_ggc_mx_eh_landing_pad_d (X);\
- } while (0)
-extern void gt_ggc_mx_eh_landing_pad_d (void *);
-#define gt_ggc_m_11eh_region_d(X) do { \
- if (X != NULL) gt_ggc_mx_eh_region_d (X);\
- } while (0)
-extern void gt_ggc_mx_eh_region_d (void *);
-#define gt_ggc_m_10vcall_insn(X) do { \
- if (X != NULL) gt_ggc_mx_vcall_insn (X);\
- } while (0)
-extern void gt_ggc_mx_vcall_insn (void *);
-#define gt_ggc_m_18VEC_vcall_entry_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_vcall_entry_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_vcall_entry_gc (void *);
-#define gt_ggc_m_18VEC_dcall_entry_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_dcall_entry_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_dcall_entry_gc (void *);
-#define gt_ggc_m_16var_loc_list_def(X) do { \
- if (X != NULL) gt_ggc_mx_var_loc_list_def (X);\
- } while (0)
-extern void gt_ggc_mx_var_loc_list_def (void *);
-#define gt_ggc_m_12var_loc_node(X) do { \
- if (X != NULL) gt_ggc_mx_var_loc_node (X);\
- } while (0)
-extern void gt_ggc_mx_var_loc_node (void *);
-#define gt_ggc_m_20VEC_die_arg_entry_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_die_arg_entry_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_die_arg_entry_gc (void *);
-#define gt_ggc_m_16limbo_die_struct(X) do { \
- if (X != NULL) gt_ggc_mx_limbo_die_struct (X);\
- } while (0)
-extern void gt_ggc_mx_limbo_die_struct (void *);
-#define gt_ggc_m_20VEC_pubname_entry_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_pubname_entry_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_pubname_entry_gc (void *);
-#define gt_ggc_m_19VEC_dw_attr_node_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_dw_attr_node_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_dw_attr_node_gc (void *);
-#define gt_ggc_m_18comdat_type_struct(X) do { \
- if (X != NULL) gt_ggc_mx_comdat_type_struct (X);\
- } while (0)
-extern void gt_ggc_mx_comdat_type_struct (void *);
-#define gt_ggc_m_25dw_ranges_by_label_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_ranges_by_label_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_ranges_by_label_struct (void *);
-#define gt_ggc_m_16dw_ranges_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_ranges_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_ranges_struct (void *);
-#define gt_ggc_m_28dw_separate_line_info_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_separate_line_info_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_separate_line_info_struct (void *);
-#define gt_ggc_m_19dw_line_info_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_line_info_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_line_info_struct (void *);
-#define gt_ggc_m_25VEC_deferred_locations_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_deferred_locations_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_deferred_locations_gc (void *);
-#define gt_ggc_m_18dw_loc_list_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_loc_list_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_loc_list_struct (void *);
-#define gt_ggc_m_15dwarf_file_data(X) do { \
- if (X != NULL) gt_ggc_mx_dwarf_file_data (X);\
- } while (0)
-extern void gt_ggc_mx_dwarf_file_data (void *);
-#define gt_ggc_m_15queued_reg_save(X) do { \
- if (X != NULL) gt_ggc_mx_queued_reg_save (X);\
- } while (0)
-extern void gt_ggc_mx_queued_reg_save (void *);
-#define gt_ggc_m_20indirect_string_node(X) do { \
- if (X != NULL) gt_ggc_mx_indirect_string_node (X);\
- } while (0)
-extern void gt_ggc_mx_indirect_string_node (void *);
-#define gt_ggc_m_19dw_loc_descr_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_loc_descr_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_loc_descr_struct (void *);
-#define gt_ggc_m_13dw_fde_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_fde_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_fde_struct (void *);
-#define gt_ggc_m_13dw_cfi_struct(X) do { \
- if (X != NULL) gt_ggc_mx_dw_cfi_struct (X);\
- } while (0)
-extern void gt_ggc_mx_dw_cfi_struct (void *);
-#define gt_ggc_m_8typeinfo(X) do { \
- if (X != NULL) gt_ggc_mx_typeinfo (X);\
- } while (0)
-extern void gt_ggc_mx_typeinfo (void *);
-#define gt_ggc_m_22VEC_alias_set_entry_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_alias_set_entry_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_alias_set_entry_gc (void *);
-#define gt_ggc_m_17alias_set_entry_d(X) do { \
- if (X != NULL) gt_ggc_mx_alias_set_entry_d (X);\
- } while (0)
-extern void gt_ggc_mx_alias_set_entry_d (void *);
-#define gt_ggc_m_24constant_descriptor_tree(X) do { \
- if (X != NULL) gt_ggc_mx_constant_descriptor_tree (X);\
- } while (0)
-extern void gt_ggc_mx_constant_descriptor_tree (void *);
-#define gt_ggc_m_15cgraph_asm_node(X) do { \
- if (X != NULL) gt_ggc_mx_cgraph_asm_node (X);\
- } while (0)
-extern void gt_ggc_mx_cgraph_asm_node (void *);
-#define gt_ggc_m_12varpool_node(X) do { \
- if (X != NULL) gt_ggc_mx_varpool_node (X);\
- } while (0)
-extern void gt_ggc_mx_varpool_node (void *);
-#define gt_ggc_m_22VEC_cgraph_node_set_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_cgraph_node_set_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_cgraph_node_set_gc (void *);
-#define gt_ggc_m_19cgraph_node_set_def(X) do { \
- if (X != NULL) gt_ggc_mx_cgraph_node_set_def (X);\
- } while (0)
-extern void gt_ggc_mx_cgraph_node_set_def (void *);
-#define gt_ggc_m_27cgraph_node_set_element_def(X) do { \
- if (X != NULL) gt_ggc_mx_cgraph_node_set_element_def (X);\
- } while (0)
-extern void gt_ggc_mx_cgraph_node_set_element_def (void *);
-#define gt_ggc_m_22VEC_cgraph_node_ptr_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_cgraph_node_ptr_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_cgraph_node_ptr_gc (void *);
-#define gt_ggc_m_11cgraph_edge(X) do { \
- if (X != NULL) gt_ggc_mx_cgraph_edge (X);\
- } while (0)
-extern void gt_ggc_mx_cgraph_edge (void *);
-#define gt_ggc_m_24VEC_ipa_replace_map_p_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_ipa_replace_map_p_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_ipa_replace_map_p_gc (void *);
-#define gt_ggc_m_15ipa_replace_map(X) do { \
- if (X != NULL) gt_ggc_mx_ipa_replace_map (X);\
- } while (0)
-extern void gt_ggc_mx_ipa_replace_map (void *);
-#define gt_ggc_m_11cgraph_node(X) do { \
- if (X != NULL) gt_ggc_mx_cgraph_node (X);\
- } while (0)
-extern void gt_ggc_mx_cgraph_node (void *);
-#define gt_ggc_m_18VEC_basic_block_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_basic_block_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_basic_block_gc (void *);
-#define gt_ggc_m_14gimple_bb_info(X) do { \
- if (X != NULL) gt_ggc_mx_gimple_bb_info (X);\
- } while (0)
-extern void gt_ggc_mx_gimple_bb_info (void *);
-#define gt_ggc_m_11rtl_bb_info(X) do { \
- if (X != NULL) gt_ggc_mx_rtl_bb_info (X);\
- } while (0)
-extern void gt_ggc_mx_rtl_bb_info (void *);
-#define gt_ggc_m_11VEC_edge_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_edge_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_edge_gc (void *);
-#define gt_ggc_m_17cselib_val_struct(X) do { \
- if (X != NULL) gt_ggc_mx_cselib_val_struct (X);\
- } while (0)
-extern void gt_ggc_mx_cselib_val_struct (void *);
-#define gt_ggc_m_12elt_loc_list(X) do { \
- if (X != NULL) gt_ggc_mx_elt_loc_list (X);\
- } while (0)
-extern void gt_ggc_mx_elt_loc_list (void *);
-#define gt_ggc_m_13VEC_loop_p_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_loop_p_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_loop_p_gc (void *);
-#define gt_ggc_m_4loop(X) do { \
- if (X != NULL) gt_ggc_mx_loop (X);\
- } while (0)
-extern void gt_ggc_mx_loop (void *);
-#define gt_ggc_m_9loop_exit(X) do { \
- if (X != NULL) gt_ggc_mx_loop_exit (X);\
- } while (0)
-extern void gt_ggc_mx_loop_exit (void *);
-#define gt_ggc_m_13nb_iter_bound(X) do { \
- if (X != NULL) gt_ggc_mx_nb_iter_bound (X);\
- } while (0)
-extern void gt_ggc_mx_nb_iter_bound (void *);
-#define gt_ggc_m_24types_used_by_vars_entry(X) do { \
- if (X != NULL) gt_ggc_mx_types_used_by_vars_entry (X);\
- } while (0)
-extern void gt_ggc_mx_types_used_by_vars_entry (void *);
-#define gt_ggc_m_17language_function(X) do { \
- if (X != NULL) gt_ggc_mx_language_function (X);\
- } while (0)
-extern void gt_ggc_mx_language_function (void *);
-#define gt_ggc_m_5loops(X) do { \
- if (X != NULL) gt_ggc_mx_loops (X);\
- } while (0)
-extern void gt_ggc_mx_loops (void *);
-#define gt_ggc_m_18control_flow_graph(X) do { \
- if (X != NULL) gt_ggc_mx_control_flow_graph (X);\
- } while (0)
-extern void gt_ggc_mx_control_flow_graph (void *);
-#define gt_ggc_m_9eh_status(X) do { \
- if (X != NULL) gt_ggc_mx_eh_status (X);\
- } while (0)
-extern void gt_ggc_mx_eh_status (void *);
-#define gt_ggc_m_20initial_value_struct(X) do { \
- if (X != NULL) gt_ggc_mx_initial_value_struct (X);\
- } while (0)
-extern void gt_ggc_mx_initial_value_struct (void *);
-#define gt_ggc_m_17rtx_constant_pool(X) do { \
- if (X != NULL) gt_ggc_mx_rtx_constant_pool (X);\
- } while (0)
-extern void gt_ggc_mx_rtx_constant_pool (void *);
-#define gt_ggc_m_18VEC_temp_slot_p_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_temp_slot_p_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_temp_slot_p_gc (void *);
-#define gt_ggc_m_9temp_slot(X) do { \
- if (X != NULL) gt_ggc_mx_temp_slot (X);\
- } while (0)
-extern void gt_ggc_mx_temp_slot (void *);
-#define gt_ggc_m_9gimple_df(X) do { \
- if (X != NULL) gt_ggc_mx_gimple_df (X);\
- } while (0)
-extern void gt_ggc_mx_gimple_df (void *);
-#define gt_ggc_m_23VEC_call_site_record_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_call_site_record_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_call_site_record_gc (void *);
-#define gt_ggc_m_18call_site_record_d(X) do { \
- if (X != NULL) gt_ggc_mx_call_site_record_d (X);\
- } while (0)
-extern void gt_ggc_mx_call_site_record_d (void *);
-#define gt_ggc_m_14sequence_stack(X) do { \
- if (X != NULL) gt_ggc_mx_sequence_stack (X);\
- } while (0)
-extern void gt_ggc_mx_sequence_stack (void *);
-#define gt_ggc_m_8elt_list(X) do { \
- if (X != NULL) gt_ggc_mx_elt_list (X);\
- } while (0)
-extern void gt_ggc_mx_elt_list (void *);
-#define gt_ggc_m_17tree_priority_map(X) do { \
- if (X != NULL) gt_ggc_mx_tree_priority_map (X);\
- } while (0)
-extern void gt_ggc_mx_tree_priority_map (void *);
-#define gt_ggc_m_12tree_int_map(X) do { \
- if (X != NULL) gt_ggc_mx_tree_int_map (X);\
- } while (0)
-extern void gt_ggc_mx_tree_int_map (void *);
-#define gt_ggc_m_8tree_map(X) do { \
- if (X != NULL) gt_ggc_mx_tree_map (X);\
- } while (0)
-extern void gt_ggc_mx_tree_map (void *);
-#define gt_ggc_m_14lang_tree_node(X) do { \
- if (X != NULL) gt_ggc_mx_lang_tree_node (X);\
- } while (0)
-extern void gt_ggc_mx_lang_tree_node (void *);
-#define gt_ggc_m_24tree_statement_list_node(X) do { \
- if (X != NULL) gt_ggc_mx_tree_statement_list_node (X);\
- } while (0)
-extern void gt_ggc_mx_tree_statement_list_node (void *);
-#define gt_ggc_m_9lang_decl(X) do { \
- if (X != NULL) gt_ggc_mx_lang_decl (X);\
- } while (0)
-extern void gt_ggc_mx_lang_decl (void *);
-#define gt_ggc_m_9lang_type(X) do { \
- if (X != NULL) gt_ggc_mx_lang_type (X);\
- } while (0)
-extern void gt_ggc_mx_lang_type (void *);
-#define gt_ggc_m_10die_struct(X) do { \
- if (X != NULL) gt_ggc_mx_die_struct (X);\
- } while (0)
-extern void gt_ggc_mx_die_struct (void *);
-#define gt_ggc_m_15varray_head_tag(X) do { \
- if (X != NULL) gt_ggc_mx_varray_head_tag (X);\
- } while (0)
-extern void gt_ggc_mx_varray_head_tag (void *);
-#define gt_ggc_m_12ptr_info_def(X) do { \
- if (X != NULL) gt_ggc_mx_ptr_info_def (X);\
- } while (0)
-extern void gt_ggc_mx_ptr_info_def (void *);
-#define gt_ggc_m_22VEC_constructor_elt_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_constructor_elt_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_constructor_elt_gc (void *);
-#define gt_ggc_m_10tree_ann_d(X) do { \
- if (X != NULL) gt_ggc_mx_tree_ann_d (X);\
- } while (0)
-extern void gt_ggc_mx_tree_ann_d (void *);
-#define gt_ggc_m_17VEC_alias_pair_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_alias_pair_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_alias_pair_gc (void *);
-#define gt_ggc_m_11VEC_tree_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_tree_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_tree_gc (void *);
-#define gt_ggc_m_12VEC_uchar_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_uchar_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_uchar_gc (void *);
-#define gt_ggc_m_8function(X) do { \
- if (X != NULL) gt_ggc_mx_function (X);\
- } while (0)
-extern void gt_ggc_mx_function (void *);
-#define gt_ggc_m_23constant_descriptor_rtx(X) do { \
- if (X != NULL) gt_ggc_mx_constant_descriptor_rtx (X);\
- } while (0)
-extern void gt_ggc_mx_constant_descriptor_rtx (void *);
-#define gt_ggc_m_11fixed_value(X) do { \
- if (X != NULL) gt_ggc_mx_fixed_value (X);\
- } while (0)
-extern void gt_ggc_mx_fixed_value (void *);
-#define gt_ggc_m_10real_value(X) do { \
- if (X != NULL) gt_ggc_mx_real_value (X);\
- } while (0)
-extern void gt_ggc_mx_real_value (void *);
-#define gt_ggc_m_10VEC_rtx_gc(X) do { \
- if (X != NULL) gt_ggc_mx_VEC_rtx_gc (X);\
- } while (0)
-extern void gt_ggc_mx_VEC_rtx_gc (void *);
-#define gt_ggc_m_12object_block(X) do { \
- if (X != NULL) gt_ggc_mx_object_block (X);\
- } while (0)
-extern void gt_ggc_mx_object_block (void *);
-#define gt_ggc_m_9reg_attrs(X) do { \
- if (X != NULL) gt_ggc_mx_reg_attrs (X);\
- } while (0)
-extern void gt_ggc_mx_reg_attrs (void *);
-#define gt_ggc_m_9mem_attrs(X) do { \
- if (X != NULL) gt_ggc_mx_mem_attrs (X);\
- } while (0)
-extern void gt_ggc_mx_mem_attrs (void *);
-#define gt_ggc_m_14bitmap_obstack(X) do { \
- if (X != NULL) gt_ggc_mx_bitmap_obstack (X);\
- } while (0)
-extern void gt_ggc_mx_bitmap_obstack (void *);
-#define gt_ggc_m_18bitmap_element_def(X) do { \
- if (X != NULL) gt_ggc_mx_bitmap_element_def (X);\
- } while (0)
-extern void gt_ggc_mx_bitmap_element_def (void *);
-#define gt_ggc_m_16machine_function(X) do { \
- if (X != NULL) gt_ggc_mx_machine_function (X);\
- } while (0)
-extern void gt_ggc_mx_machine_function (void *);
-#define gt_ggc_m_17stack_local_entry(X) do { \
- if (X != NULL) gt_ggc_mx_stack_local_entry (X);\
- } while (0)
-extern void gt_ggc_mx_stack_local_entry (void *);
-#define gt_ggc_m_15basic_block_def(X) do { \
- if (X != NULL) gt_ggc_mx_basic_block_def (X);\
- } while (0)
-extern void gt_ggc_mx_basic_block_def (void *);
-#define gt_ggc_m_8edge_def(X) do { \
- if (X != NULL) gt_ggc_mx_edge_def (X);\
- } while (0)
-extern void gt_ggc_mx_edge_def (void *);
-#define gt_ggc_m_17gimple_seq_node_d(X) do { \
- if (X != NULL) gt_ggc_mx_gimple_seq_node_d (X);\
- } while (0)
-extern void gt_ggc_mx_gimple_seq_node_d (void *);
-#define gt_ggc_m_12gimple_seq_d(X) do { \
- if (X != NULL) gt_ggc_mx_gimple_seq_d (X);\
- } while (0)
-extern void gt_ggc_mx_gimple_seq_d (void *);
-#define gt_ggc_m_7section(X) do { \
- if (X != NULL) gt_ggc_mx_section (X);\
- } while (0)
-extern void gt_ggc_mx_section (void *);
-#define gt_ggc_m_18gimple_statement_d(X) do { \
- if (X != NULL) gt_ggc_mx_gimple_statement_d (X);\
- } while (0)
-extern void gt_ggc_mx_gimple_statement_d (void *);
-#define gt_ggc_m_9rtvec_def(X) do { \
- if (X != NULL) gt_ggc_mx_rtvec_def (X);\
- } while (0)
-extern void gt_ggc_mx_rtvec_def (void *);
-#define gt_ggc_m_7rtx_def(X) do { \
- if (X != NULL) gt_ggc_mx_rtx_def (X);\
- } while (0)
-extern void gt_ggc_mx_rtx_def (void *);
-#define gt_ggc_m_15bitmap_head_def(X) do { \
- if (X != NULL) gt_ggc_mx_bitmap_head_def (X);\
- } while (0)
-extern void gt_ggc_mx_bitmap_head_def (void *);
-#define gt_ggc_m_9tree_node(X) do { \
- if (X != NULL) gt_ggc_mx_tree_node (X);\
- } while (0)
-#define gt_ggc_mx_tree_node gt_ggc_mx_lang_tree_node
-#define gt_ggc_m_6answer(X) do { \
- if (X != NULL) gt_ggc_mx_answer (X);\
- } while (0)
-extern void gt_ggc_mx_answer (void *);
-#define gt_ggc_m_9cpp_macro(X) do { \
- if (X != NULL) gt_ggc_mx_cpp_macro (X);\
- } while (0)
-extern void gt_ggc_mx_cpp_macro (void *);
-#define gt_ggc_m_9cpp_token(X) do { \
- if (X != NULL) gt_ggc_mx_cpp_token (X);\
- } while (0)
-extern void gt_ggc_mx_cpp_token (void *);
-#define gt_ggc_m_9line_maps(X) do { \
- if (X != NULL) gt_ggc_mx_line_maps (X);\
- } while (0)
-extern void gt_ggc_mx_line_maps (void *);
-extern void gt_ggc_m_II17splay_tree_node_s (void *);
-extern void gt_ggc_m_SP9tree_node17splay_tree_node_s (void *);
-extern void gt_ggc_m_P9tree_nodeP9tree_node17splay_tree_node_s (void *);
-extern void gt_ggc_m_IP9tree_node17splay_tree_node_s (void *);
-extern void gt_ggc_m_P13tree_llvm_map4htab (void *);
-extern void gt_ggc_m_P15interface_tuple4htab (void *);
-extern void gt_ggc_m_P16volatilized_type4htab (void *);
-extern void gt_ggc_m_P17string_descriptor4htab (void *);
-extern void gt_ggc_m_P14type_assertion4htab (void *);
-extern void gt_ggc_m_P18treetreehash_entry4htab (void *);
-extern void gt_ggc_m_P17module_htab_entry4htab (void *);
-extern void gt_ggc_m_P16def_pragma_macro4htab (void *);
-extern void gt_ggc_m_P21pending_abstract_type4htab (void *);
-extern void gt_ggc_m_P10spec_entry4htab (void *);
-extern void gt_ggc_m_P16cxx_int_tree_map4htab (void *);
-extern void gt_ggc_m_P17named_label_entry4htab (void *);
-extern void gt_ggc_m_P12tree_int_map4htab (void *);
-extern void gt_ggc_m_P20lto_symtab_entry_def4htab (void *);
-extern void gt_ggc_m_IP9tree_node12splay_tree_s (void *);
-extern void gt_ggc_m_P9tree_nodeP9tree_node12splay_tree_s (void *);
-extern void gt_ggc_m_P12varpool_node4htab (void *);
-extern void gt_ggc_m_P13scev_info_str4htab (void *);
-extern void gt_ggc_m_P23constant_descriptor_rtx4htab (void *);
-extern void gt_ggc_m_P24constant_descriptor_tree4htab (void *);
-extern void gt_ggc_m_P12object_block4htab (void *);
-extern void gt_ggc_m_P7section4htab (void *);
-extern void gt_ggc_m_P17tree_priority_map4htab (void *);
-extern void gt_ggc_m_P8tree_map4htab (void *);
-extern void gt_ggc_m_P9type_hash4htab (void *);
-extern void gt_ggc_m_P13libfunc_entry4htab (void *);
-extern void gt_ggc_m_P23temp_slot_address_entry4htab (void *);
-extern void gt_ggc_m_P15throw_stmt_node4htab (void *);
-extern void gt_ggc_m_P9reg_attrs4htab (void *);
-extern void gt_ggc_m_P9mem_attrs4htab (void *);
-extern void gt_ggc_m_P7rtx_def4htab (void *);
-extern void gt_ggc_m_SP9tree_node12splay_tree_s (void *);
-extern void gt_ggc_m_P10vcall_insn4htab (void *);
-extern void gt_ggc_m_P16var_loc_list_def4htab (void *);
-extern void gt_ggc_m_P10die_struct4htab (void *);
-extern void gt_ggc_m_P15dwarf_file_data4htab (void *);
-extern void gt_ggc_m_P20indirect_string_node4htab (void *);
-extern void gt_ggc_m_P11cgraph_node4htab (void *);
-extern void gt_ggc_m_II12splay_tree_s (void *);
-extern void gt_ggc_m_P27cgraph_node_set_element_def4htab (void *);
-extern void gt_ggc_m_P11cgraph_edge4htab (void *);
-extern void gt_ggc_m_P9loop_exit4htab (void *);
-extern void gt_ggc_m_P24types_used_by_vars_entry4htab (void *);
-extern void gt_ggc_m_P9tree_node4htab (void *);
-
-/* functions code */
-
-void
-gt_ggc_mx_tree_llvm_map (void *x_p)
-{
- struct tree_llvm_map * const x = (struct tree_llvm_map *)x_p;
- if (ggc_test_and_set_mark (x))
- {
- gt_ggc_m_9tree_node ((*x).base.from);
- }
-}
-
-void
-gt_ggc_m_P13tree_llvm_map4htab (void *x_p)
-{
- struct htab * const x = (struct htab *)x_p;
- if (ggc_test_and_set_mark (x))
- {
- if ((*x).entries != NULL) {
- size_t i0;
- for (i0 = 0; i0 != (size_t)(((*x)).size); i0++) {
- gt_ggc_m_13tree_llvm_map ((*x).entries[i0]);
- }
- ggc_mark ((*x).entries);
- }
- }
-}
-
-/* GC roots. */
-
-EXPORTED_CONST struct ggc_cache_tab gt_ggc_rc__gt_llvm_cache_h[] = {
- {
- &llvm_cache,
- 1,
- sizeof (llvm_cache),
- &gt_ggc_mx_tree_llvm_map,
- NULL,
- &tree_llvm_map_marked_p
- },
- LAST_GGC_CACHE_TAB
-};
-
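
For readers unfamiliar with gengtype output, here is a minimal sketch of what the generated cache-marking code above does. It uses simplified stand-ins (a std::set for the mark bits, a vector-backed table) rather than GCC's real ggc API, so it only illustrates the walking pattern: mark the hash table root, then mark every live entry so the cached tree keys survive a collection.

#include <cstddef>
#include <iostream>
#include <set>
#include <vector>

static std::set<const void *> Marked;            // stand-in for the GC mark bits

// Like ggc_test_and_set_mark: true only the first time an object is marked.
static bool test_and_set_mark(const void *p) {
  return p != NULL && Marked.insert(p).second;
}

struct tree_llvm_map { const void *from; };             // simplified cache entry
struct htab { std::vector<tree_llvm_map *> entries; };  // simplified hash table

static void mark_tree_llvm_map(tree_llvm_map *x) {
  if (test_and_set_mark(x))
    test_and_set_mark(x->from);                  // keep the GCC tree key alive
}

// Analogue of gt_ggc_m_P13tree_llvm_map4htab: mark the table, then each entry.
static void mark_cache_htab(htab *x) {
  if (!test_and_set_mark(x))
    return;
  for (std::size_t i = 0; i != x->entries.size(); ++i)
    mark_tree_llvm_map(x->entries[i]);
}

int main() {
  int SomeTree = 0;                              // pretend GCC tree node
  tree_llvm_map Entry = { &SomeTree };
  htab Cache;
  Cache.entries.push_back(&Entry);
  mark_cache_htab(&Cache);
  std::cout << "marked objects: " << Marked.size() << "\n";  // prints 3
  return 0;
}
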
Copied: dragonegg/trunk/linux/OS.h (from r127403, dragonegg/trunk/linux/llvm-os.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/linux/OS.h?p2=dragonegg/trunk/linux/OS.h&p1=dragonegg/trunk/linux/llvm-os.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/linux/llvm-os.h (original)
+++ dragonegg/trunk/linux/OS.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===---------- llvm-os.h - Linux specific definitions ----------*- C++ -*-===//
+//===------------- OS.h - Linux specific definitions ------------*- C++ -*-===//
//
// Copyright (C) 2009, 2010, 2011 Duncan Sands et al.
//
@@ -20,8 +20,8 @@
// This file provides Linux specific declarations.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_OS_H
-#define LLVM_OS_H
+#ifndef DRAGONEGG_OS_H
+#define DRAGONEGG_OS_H
/* Yes, we support PIC codegen for linux targets! */
#define LLVM_SET_TARGET_OPTIONS(argvec) \
@@ -30,4 +30,4 @@
else \
argvec.push_back ("--relocation-model=static");
-#endif /* LLVM_OS_H */
+#endif /* DRAGONEGG_OS_H */
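
A minimal sketch of how this hook is consumed, assuming a stand-in flag_pic variable in place of GCC's global of the same name: ConfigureLLVM (see the Backend.cpp code further down) builds a vector of option strings, expands LLVM_SET_TARGET_OPTIONS into it, and hands the result to cl::ParseCommandLineOptions.

#include <cstddef>
#include <iostream>
#include <vector>

static int flag_pic = 1;   // assumption: stands in for GCC's flag_pic global

#define LLVM_SET_TARGET_OPTIONS(argvec)             \
  if (flag_pic)                                     \
    argvec.push_back ("--relocation-model=pic");    \
  else                                              \
    argvec.push_back ("--relocation-model=static");

int main() {
  std::vector<const char *> Args;
  Args.push_back("dragonegg");     // slot normally holding the program name
  LLVM_SET_TARGET_OPTIONS(Args);   // expands to the pic/static choice above
  for (std::size_t i = 0; i != Args.size(); ++i)
    std::cout << Args[i] << "\n";  // prints the options that would reach LLVM
  return 0;
}
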
Removed: dragonegg/trunk/linux/llvm-os.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/linux/llvm-os.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/linux/llvm-os.h (original)
+++ dragonegg/trunk/linux/llvm-os.h (removed)
@@ -1,33 +0,0 @@
-//===---------- llvm-os.h - Linux specific definitions ----------*- C++ -*-===//
-//
-// Copyright (C) 2009, 2010, 2011 Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file provides Linux specific declarations.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_OS_H
-#define LLVM_OS_H
-
-/* Yes, we support PIC codegen for linux targets! */
-#define LLVM_SET_TARGET_OPTIONS(argvec) \
- if (flag_pic) \
- argvec.push_back ("--relocation-model=pic"); \
- else \
- argvec.push_back ("--relocation-model=static");
-
-#endif /* LLVM_OS_H */
Removed: dragonegg/trunk/llvm-abi-default.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-abi-default.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-abi-default.cpp (original)
+++ dragonegg/trunk/llvm-abi-default.cpp (removed)
@@ -1,465 +0,0 @@
-//===--------- llvm-abi-default.cpp - Default ABI implementation ----------===//
-//
-// Copyright (C) 2010, 2011 Rafael Espindola, Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file implements the default ABI.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-abi.h"
-
-// System headers
-#include <gmp.h>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-}
-
-// doNotUseShadowReturn - Return true if the specified GCC type
-// should not be returned using a pointer to struct parameter.
-bool doNotUseShadowReturn(tree type, tree fndecl, CallingConv::ID CC) {
- if (!TYPE_SIZE(type))
- return false;
- if (TREE_CODE(TYPE_SIZE(type)) != INTEGER_CST)
- return false;
- // LLVM says do not use shadow argument.
- if (LLVM_SHOULD_NOT_RETURN_COMPLEX_IN_MEMORY(type) ||
- LLVM_SHOULD_NOT_USE_SHADOW_RETURN(type, CC))
- return true;
- // GCC says use shadow argument.
- if (aggregate_value_p(type, fndecl))
- return false;
- return true;
-}
-
-/// isSingleElementStructOrArray - If this is (recursively) a structure with one
-/// field or an array with one element, return the field type, otherwise return
-/// null. Returns null for complex number types. If ignoreZeroLength, the
-/// struct (recursively) may include zero-length fields in addition to the
-/// single element that has data. If rejectFatBitField, and the single element
-/// is a bitfield of a type that's bigger than the struct, return null anyway.
-tree isSingleElementStructOrArray(tree type, bool ignoreZeroLength,
- bool rejectFatBitfield) {
- // Complex numbers have two fields.
- if (TREE_CODE(type) == COMPLEX_TYPE) return 0;
- // All other scalars are good.
- if (!AGGREGATE_TYPE_P(type)) return type;
-
- tree FoundField = 0;
- switch (TREE_CODE(type)) {
- case QUAL_UNION_TYPE:
- case UNION_TYPE: // Single element unions don't count.
- case COMPLEX_TYPE: // Complex values are like 2-element records.
- default:
- return 0;
- case RECORD_TYPE:
- // If this record has variable length, reject it.
- if (TREE_CODE(TYPE_SIZE(type)) != INTEGER_CST)
- return 0;
-
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field))
- if (TREE_CODE(Field) == FIELD_DECL) {
- if (ignoreZeroLength) {
- if (DECL_SIZE(Field) &&
- TREE_CODE(DECL_SIZE(Field)) == INTEGER_CST &&
- TREE_INT_CST_LOW(DECL_SIZE(Field)) == 0)
- continue;
- }
- if (!FoundField) {
- if (rejectFatBitfield &&
- TREE_CODE(TYPE_SIZE(type)) == INTEGER_CST &&
- TREE_INT_CST_LOW(TYPE_SIZE(TREE_TYPE(Field))) >
- TREE_INT_CST_LOW(TYPE_SIZE(type)))
- return 0;
- FoundField = TREE_TYPE(Field);
- } else {
- return 0; // More than one field.
- }
- }
- return FoundField ? isSingleElementStructOrArray(FoundField,
- ignoreZeroLength, false)
- : 0;
- case ARRAY_TYPE:
- const ArrayType *Ty = dyn_cast<ArrayType>(ConvertType(type));
- if (!Ty || Ty->getNumElements() != 1)
- return 0;
- return isSingleElementStructOrArray(TREE_TYPE(type), false, false);
- }
-}
-
-/// isZeroSizedStructOrUnion - Returns true if this is a struct or union
-/// which is zero bits wide.
-bool isZeroSizedStructOrUnion(tree type) {
- if (TREE_CODE(type) != RECORD_TYPE &&
- TREE_CODE(type) != UNION_TYPE &&
- TREE_CODE(type) != QUAL_UNION_TYPE)
- return false;
- return int_size_in_bytes(type) == 0;
-}
-
-DefaultABI::DefaultABI(DefaultABIClient &c) : C(c) {}
-
-bool DefaultABI::isShadowReturn() const { return C.isShadowReturn(); }
-
-/// HandleReturnType - This is invoked by the target-independent code for the
-/// return type. It potentially breaks down the argument and invokes methods
-/// on the client that indicate how its pieces should be handled. This
-/// handles things like returning structures via hidden parameters.
-void DefaultABI::HandleReturnType(tree type, tree fn, bool isBuiltin) {
- unsigned Offset = 0;
- const Type *Ty = ConvertType(type);
- if (Ty->isVectorTy()) {
- // Vector handling is weird on x86. In particular builtin and
- // non-builtin function of the same return types can use different
- // calling conventions.
- tree ScalarType = LLVM_SHOULD_RETURN_VECTOR_AS_SCALAR(type, isBuiltin);
- if (ScalarType)
- C.HandleAggregateResultAsScalar(ConvertType(ScalarType));
- else if (LLVM_SHOULD_RETURN_VECTOR_AS_SHADOW(type, isBuiltin))
- C.HandleScalarShadowResult(Ty->getPointerTo(), false);
- else
- C.HandleScalarResult(Ty);
- } else if (Ty->isSingleValueType() || Ty->isVoidTy()) {
- // Return scalar values normally.
- C.HandleScalarResult(Ty);
- } else if (doNotUseShadowReturn(type, fn, C.getCallingConv())) {
- tree SingleElt = LLVM_SHOULD_RETURN_SELT_STRUCT_AS_SCALAR(type);
- if (SingleElt && TYPE_SIZE(SingleElt) &&
- TREE_CODE(TYPE_SIZE(SingleElt)) == INTEGER_CST &&
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(type)) ==
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(SingleElt))) {
- C.HandleAggregateResultAsScalar(ConvertType(SingleElt));
- } else {
- // Otherwise return as an integer value large enough to hold the entire
- // aggregate.
- if (const Type *AggrTy = LLVM_AGGR_TYPE_FOR_STRUCT_RETURN(type,
- C.getCallingConv()))
- C.HandleAggregateResultAsAggregate(AggrTy);
- else if (const Type* ScalarTy =
- LLVM_SCALAR_TYPE_FOR_STRUCT_RETURN(type, &Offset))
- C.HandleAggregateResultAsScalar(ScalarTy, Offset);
- else {
- assert(0 && "Unable to determine how to return this aggregate!");
- abort();
- }
- }
- } else {
- // If the function is returning a struct or union, we pass the pointer to
- // the struct as the first argument to the function.
-
- // FIXME: should return the hidden first argument for some targets
- // (e.g. ELF i386).
- if (AGGREGATE_TYPE_P(type))
- C.HandleAggregateShadowResult(Ty->getPointerTo(), false);
- else
- C.HandleScalarShadowResult(Ty->getPointerTo(), false);
- }
-}
-
-/// HandleArgument - This is invoked by the target-independent code for each
-/// argument type passed into the function. It potentially breaks down the
-/// argument and invokes methods on the client that indicate how its pieces
-/// should be handled. This handles things like decimating structures into
-/// their fields.
-void DefaultABI::HandleArgument(tree type, std::vector<const Type*> &ScalarElts,
- Attributes *Attributes) {
- unsigned Size = 0;
- bool DontCheckAlignment = false;
- const Type *Ty = ConvertType(type);
- // Figure out if this field is zero bits wide, e.g. {} or [0 x int]. Do
- // not include variable sized fields here.
- std::vector<const Type*> Elts;
- if (Ty->isVoidTy()) {
- // Handle void explicitly as an opaque type.
- const Type *OpTy = OpaqueType::get(getGlobalContext());
- C.HandleScalarArgument(OpTy, type);
- ScalarElts.push_back(OpTy);
- } else if (isPassedByInvisibleReference(type)) { // variable size -> by-ref.
- const Type *PtrTy = Ty->getPointerTo();
- C.HandleByInvisibleReferenceArgument(PtrTy, type);
- ScalarElts.push_back(PtrTy);
- } else if (Ty->isVectorTy()) {
- if (LLVM_SHOULD_PASS_VECTOR_IN_INTEGER_REGS(type)) {
- PassInIntegerRegisters(type, ScalarElts, 0, false);
- } else if (LLVM_SHOULD_PASS_VECTOR_USING_BYVAL_ATTR(type)) {
- C.HandleByValArgument(Ty, type);
- if (Attributes) {
- *Attributes |= Attribute::ByVal;
- *Attributes |=
- Attribute::constructAlignmentFromInt(LLVM_BYVAL_ALIGNMENT(type));
- }
- } else {
- C.HandleScalarArgument(Ty, type);
- ScalarElts.push_back(Ty);
- }
- } else if (LLVM_TRY_PASS_AGGREGATE_CUSTOM(type, ScalarElts,
- C.getCallingConv(), &C)) {
- // Nothing to do.
- } else if (Ty->isSingleValueType()) {
- C.HandleScalarArgument(Ty, type);
- ScalarElts.push_back(Ty);
- } else if (LLVM_SHOULD_PASS_AGGREGATE_AS_FCA(type, Ty)) {
- C.HandleFCAArgument(Ty, type);
- } else if (LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS(type, Ty,
- C.getCallingConv(),
- Elts)) {
- if (!LLVM_AGGREGATE_PARTIALLY_PASSED_IN_REGS(Elts, ScalarElts,
- C.isShadowReturn(),
- C.getCallingConv()))
- PassInMixedRegisters(Ty, Elts, ScalarElts);
- else {
- C.HandleByValArgument(Ty, type);
- if (Attributes) {
- *Attributes |= Attribute::ByVal;
- *Attributes |=
- Attribute::constructAlignmentFromInt(LLVM_BYVAL_ALIGNMENT(type));
- }
- }
- } else if (LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR(type, Ty)) {
- C.HandleByValArgument(Ty, type);
- if (Attributes) {
- *Attributes |= Attribute::ByVal;
- *Attributes |=
- Attribute::constructAlignmentFromInt(LLVM_BYVAL_ALIGNMENT(type));
- }
- } else if (LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS(type, &Size,
- &DontCheckAlignment)) {
- PassInIntegerRegisters(type, ScalarElts, Size, DontCheckAlignment);
- } else if (isZeroSizedStructOrUnion(type)) {
- // Zero sized struct or union, just drop it!
- ;
- } else if (TREE_CODE(type) == RECORD_TYPE) {
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field))
- if (TREE_CODE(Field) == FIELD_DECL) {
- const tree Ftype = TREE_TYPE(Field);
- const Type *FTy = ConvertType(Ftype);
- unsigned FNo = GetFieldIndex(Field, Ty);
- assert(FNo < INT_MAX && "Case not handled yet!");
-
- // Currently, a byval type inside a non-byval struct is a zero-length
- // object inside a bigger object on x86-64. This type should be
- // skipped (but only when it is inside a bigger object).
- // (We know there currently are no other such cases active because
- // they would hit the assert in FunctionPrologArgumentConversion::
- // HandleByValArgument.)
- if (!LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR(Ftype, FTy)) {
- C.EnterField(FNo, Ty);
- HandleArgument(TREE_TYPE(Field), ScalarElts);
- C.ExitField();
- }
- }
- } else if (TREE_CODE(type) == COMPLEX_TYPE) {
- C.EnterField(0, Ty);
- HandleArgument(TREE_TYPE(type), ScalarElts);
- C.ExitField();
- C.EnterField(1, Ty);
- HandleArgument(TREE_TYPE(type), ScalarElts);
- C.ExitField();
- } else if ((TREE_CODE(type) == UNION_TYPE) ||
- (TREE_CODE(type) == QUAL_UNION_TYPE)) {
- HandleUnion(type, ScalarElts);
- } else if (TREE_CODE(type) == ARRAY_TYPE) {
- // Array with padding?
- if (Ty->isStructTy())
- Ty = cast<StructType>(Ty)->getTypeAtIndex(0U);
- const ArrayType *ATy = cast<ArrayType>(Ty);
- for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i) {
- C.EnterField(i, Ty);
- HandleArgument(TREE_TYPE(type), ScalarElts);
- C.ExitField();
- }
- } else {
- assert(0 && "unknown aggregate type!");
- abort();
- }
-}
-
-/// HandleUnion - Handle a UNION_TYPE or QUAL_UNION_TYPE tree.
-void DefaultABI::HandleUnion(tree type, std::vector<const Type*> &ScalarElts) {
- if (TYPE_TRANSPARENT_AGGR(type)) {
- tree Field = TYPE_FIELDS(type);
- assert(Field && "Transparent union must have some elements!");
- while (TREE_CODE(Field) != FIELD_DECL) {
- Field = TREE_CHAIN(Field);
- assert(Field && "Transparent union must have some elements!");
- }
-
- HandleArgument(TREE_TYPE(Field), ScalarElts);
- } else {
- // Unions pass the largest element.
- unsigned MaxSize = 0;
- tree MaxElt = 0;
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field)) {
- if (TREE_CODE(Field) == FIELD_DECL) {
- // Skip fields that are known not to be present.
- if (TREE_CODE(type) == QUAL_UNION_TYPE &&
- integer_zerop(DECL_QUALIFIER(Field)))
- continue;
-
- tree SizeTree = TYPE_SIZE(TREE_TYPE(Field));
- unsigned Size = ((unsigned)TREE_INT_CST_LOW(SizeTree)+7)/8;
- if (Size > MaxSize) {
- MaxSize = Size;
- MaxElt = Field;
- }
-
- // Skip remaining fields if this one is known to be present.
- if (TREE_CODE(type) == QUAL_UNION_TYPE &&
- integer_onep(DECL_QUALIFIER(Field)))
- break;
- }
- }
-
- if (MaxElt)
- HandleArgument(TREE_TYPE(MaxElt), ScalarElts);
- }
-}
-
-/// PassInIntegerRegisters - Given an aggregate value that should be passed in
-/// integer registers, convert it to a structure containing ints and pass all
-/// of the struct elements in. If Size is set we pass only that many bytes.
-void DefaultABI::PassInIntegerRegisters(tree type,
- std::vector<const Type*> &ScalarElts,
- unsigned origSize,
- bool DontCheckAlignment) {
- unsigned Size;
- if (origSize)
- Size = origSize;
- else
- Size = TREE_INT_CST_LOW(TYPE_SIZE(type))/8;
-
- // FIXME: We should preserve all aggregate value alignment information.
- // Work around to preserve some aggregate value alignment information:
- // don't bitcast aggregate value to Int64 if its alignment is different
- // from Int64 alignment. ARM backend needs this.
- unsigned Align = TYPE_ALIGN(type)/8;
- unsigned Int64Align =
- getTargetData().getABITypeAlignment(Type::getInt64Ty(getGlobalContext()));
- bool UseInt64 = (DontCheckAlignment || Align >= Int64Align);
-
- unsigned ElementSize = UseInt64 ? 8:4;
- unsigned ArraySize = Size / ElementSize;
-
- // Put as much of the aggregate as possible into an array.
- const Type *ATy = NULL;
- const Type *ArrayElementType = NULL;
- if (ArraySize) {
- Size = Size % ElementSize;
- ArrayElementType = (UseInt64 ?
- Type::getInt64Ty(getGlobalContext()) :
- Type::getInt32Ty(getGlobalContext()));
- ATy = ArrayType::get(ArrayElementType, ArraySize);
- }
-
- // Pass any leftover bytes as a separate element following the array.
- unsigned LastEltRealSize = 0;
- const llvm::Type *LastEltTy = 0;
- if (Size > 4) {
- LastEltTy = Type::getInt64Ty(getGlobalContext());
- } else if (Size > 2) {
- LastEltTy = Type::getInt32Ty(getGlobalContext());
- } else if (Size > 1) {
- LastEltTy = Type::getInt16Ty(getGlobalContext());
- } else if (Size > 0) {
- LastEltTy = Type::getInt8Ty(getGlobalContext());
- }
- if (LastEltTy) {
- if (Size != getTargetData().getTypeAllocSize(LastEltTy))
- LastEltRealSize = Size;
- }
-
- std::vector<const Type*> Elts;
- if (ATy)
- Elts.push_back(ATy);
- if (LastEltTy)
- Elts.push_back(LastEltTy);
- const StructType *STy = StructType::get(getGlobalContext(), Elts, false);
-
- unsigned i = 0;
- if (ArraySize) {
- C.EnterField(0, STy);
- for (unsigned j = 0; j < ArraySize; ++j) {
- C.EnterField(j, ATy);
- C.HandleScalarArgument(ArrayElementType, 0);
- ScalarElts.push_back(ArrayElementType);
- C.ExitField();
- }
- C.ExitField();
- ++i;
- }
- if (LastEltTy) {
- C.EnterField(i, STy);
- C.HandleScalarArgument(LastEltTy, 0, LastEltRealSize);
- ScalarElts.push_back(LastEltTy);
- C.ExitField();
- }
-}
-
-/// PassInMixedRegisters - Given an aggregate value that should be passed in
-/// mixed integer, floating point, and vector registers, convert it to a
-/// structure containing the specified struct elements in.
-void DefaultABI::PassInMixedRegisters(const Type *Ty,
- std::vector<const Type*> &OrigElts,
- std::vector<const Type*> &ScalarElts) {
- // We use VoidTy in OrigElts to mean "this is a word in the aggregate
- // that occupies storage but has no useful information, and is not passed
- // anywhere". Happens on x86-64.
- std::vector<const Type*> Elts(OrigElts);
- const Type* wordType = getTargetData().getPointerSize() == 4 ?
- Type::getInt32Ty(getGlobalContext()) : Type::getInt64Ty(getGlobalContext());
- for (unsigned i=0, e=Elts.size(); i!=e; ++i)
- if (OrigElts[i]->isVoidTy())
- Elts[i] = wordType;
-
- const StructType *STy = StructType::get(getGlobalContext(), Elts, false);
-
- unsigned Size = getTargetData().getTypeAllocSize(STy);
- const StructType *InSTy = dyn_cast<StructType>(Ty);
- unsigned InSize = 0;
- // If Ty and STy size does not match then last element is accessing
- // extra bits.
- unsigned LastEltSizeDiff = 0;
- if (InSTy) {
- InSize = getTargetData().getTypeAllocSize(InSTy);
- if (InSize < Size) {
- unsigned N = STy->getNumElements();
- const llvm::Type *LastEltTy = STy->getElementType(N-1);
- if (LastEltTy->isIntegerTy())
- LastEltSizeDiff =
- getTargetData().getTypeAllocSize(LastEltTy) - (Size - InSize);
- }
- }
- for (unsigned i = 0, e = Elts.size(); i != e; ++i) {
- if (!OrigElts[i]->isVoidTy()) {
- C.EnterField(i, STy);
- unsigned RealSize = 0;
- if (LastEltSizeDiff && i == (e - 1))
- RealSize = LastEltSizeDiff;
- C.HandleScalarArgument(Elts[i], 0, RealSize);
- ScalarElts.push_back(Elts[i]);
- C.ExitField();
- }
- }
-}
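
The walker above drives everything through DefaultABIClient callbacks. Here is a minimal sketch of that pattern, using hypothetical stand-in types instead of GCC trees and the real dragonegg client interface; concrete clients override only the callbacks they care about.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct ABIClient {                           // stand-in for DefaultABIClient
  virtual ~ABIClient() {}
  virtual void HandleScalarArgument(const std::string &/*Ty*/) {}
  virtual void EnterField(unsigned /*FieldNo*/) {}
  virtual void ExitField() {}
};

// Hypothetical client that just prints what the walker decides.
struct PrintingClient : ABIClient {
  virtual void HandleScalarArgument(const std::string &Ty) {
    std::cout << "  scalar argument of type " << Ty << "\n";
  }
  virtual void EnterField(unsigned FieldNo) {
    std::cout << "enter field " << FieldNo << "\n";
  }
  virtual void ExitField() { std::cout << "exit field\n"; }
};

// Toy analogue of DefaultABI::HandleArgument for a record of scalars: enter
// each field, report its type to the client, leave the field.
static void HandleStructArgument(const std::vector<std::string> &Fields,
                                 ABIClient &C) {
  for (std::size_t i = 0, e = Fields.size(); i != e; ++i) {
    C.EnterField((unsigned)i);
    C.HandleScalarArgument(Fields[i]);
    C.ExitField();
  }
}

int main() {
  PrintingClient C;
  std::vector<std::string> Fields;
  Fields.push_back("i32");
  Fields.push_back("float");
  HandleStructArgument(Fields, C);
  return 0;
}
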
Removed: dragonegg/trunk/llvm-abi.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-abi.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-abi.h (original)
+++ dragonegg/trunk/llvm-abi.h (removed)
@@ -1,351 +0,0 @@
-//===------ llvm-abi.h - Processor ABI customization hooks ------*- C++ -*-===//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file specifies how argument values are passed and returned from function
-// calls. This allows the target to specialize handling of things like how
-// structures are passed by-value.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_ABI_H
-#define LLVM_ABI_H
-
-// Plugin headers
-#include "llvm-internal.h"
-#include "llvm-target.h"
-
-// LLVM headers
-#include "llvm/LLVMContext.h"
-#include "llvm/Target/TargetData.h"
-
-namespace llvm {
- class BasicBlock;
-}
-
-/// DefaultABIClient - This is a simple implementation of the ABI client
-/// interface that can be subclassed.
-struct DefaultABIClient {
- virtual ~DefaultABIClient() {}
- virtual CallingConv::ID& getCallingConv(void) = 0;
- virtual bool isShadowReturn() const { return false; }
-
- /// HandleScalarResult - This callback is invoked if the function returns a
- /// simple scalar result value, which is of type RetTy.
- virtual void HandleScalarResult(const Type * /*RetTy*/) {}
-
- /// HandleAggregateResultAsScalar - This callback is invoked if the function
- /// returns an aggregate value by bit converting it to the specified scalar
- /// type and returning that. The bit conversion should start at byte Offset
- /// within the struct, and ScalarTy is not necessarily big enough to cover
- /// the entire struct.
- virtual void HandleAggregateResultAsScalar(const Type * /*ScalarTy*/,
- unsigned /*Offset*/ = 0) {}
-
- /// HandleAggregateResultAsAggregate - This callback is invoked if the function
- /// returns an aggregate value using multiple return values.
- virtual void HandleAggregateResultAsAggregate(const Type * /*AggrTy*/) {}
-
- /// HandleAggregateShadowResult - This callback is invoked if the function
- /// returns an aggregate value by using a "shadow" first parameter, which is
- /// a pointer to the aggregate, of type PtrArgTy. If RetPtr is set to true,
- /// the pointer argument itself is returned from the function.
- virtual void HandleAggregateShadowResult(const PointerType * /*PtrArgTy*/,
- bool /*RetPtr*/) {}
-
- /// HandleScalarShadowResult - This callback is invoked if the function
- /// returns a scalar value by using a "shadow" first parameter, which is a
- /// pointer to the scalar, of type PtrArgTy. If RetPtr is set to true,
- /// the pointer argument itself is returned from the function.
- virtual void HandleScalarShadowResult(const PointerType * /*PtrArgTy*/,
- bool /*RetPtr*/) {}
-
-
- /// HandleScalarArgument - This is the primary callback that specifies an
- /// LLVM argument to pass. It is only used for first class types.
- /// If RealSize is non Zero then it specifies number of bytes to access
- /// from LLVMTy.
- virtual void HandleScalarArgument(const llvm::Type * /*LLVMTy*/,
- tree_node * /*type*/,
- unsigned /*RealSize*/ = 0) {}
-
- /// HandleByInvisibleReferenceArgument - This callback is invoked if a pointer
- /// (of type PtrTy) to the argument is passed rather than the argument itself.
- virtual void HandleByInvisibleReferenceArgument(const llvm::Type * /*PtrTy*/,
- tree_node * /*type*/) {}
-
- /// HandleByValArgument - This callback is invoked if the aggregate function
- /// argument is passed by value.
- virtual void HandleByValArgument(const llvm::Type * /*LLVMTy*/,
- tree_node * /*type*/) {}
-
- /// HandleFCAArgument - This callback is invoked if the aggregate function
- /// argument is passed by value as a first class aggregate.
- virtual void HandleFCAArgument(const llvm::Type * /*LLVMTy*/,
- tree_node * /*type*/) {}
-
- /// EnterField - Called when we're about to enter the field of a struct
- /// or union. FieldNo is the number of the element we are entering in the
- /// LLVM Struct, StructTy is the LLVM type of the struct we are entering.
- virtual void EnterField(unsigned /*FieldNo*/,
- const llvm::Type * /*StructTy*/) {}
- virtual void ExitField() {}
- virtual void HandlePad(const llvm::Type * /*LLVMTy*/) {}
-};
-
-// LLVM_SHOULD_NOT_RETURN_COMPLEX_IN_MEMORY - A hook to allow
-// special _Complex handling. Return true if X should be returned using
-// multiple value return instruction.
-#ifndef LLVM_SHOULD_NOT_RETURN_COMPLEX_IN_MEMORY
-#define LLVM_SHOULD_NOT_RETURN_COMPLEX_IN_MEMORY(X) \
- false
-#endif
-
-// LLVM_SHOULD_NOT_USE_SHADOW_RETURN - A hook to allow aggregates to be
-// returned in registers.
-#ifndef LLVM_SHOULD_NOT_USE_SHADOW_RETURN
-#define LLVM_SHOULD_NOT_USE_SHADOW_RETURN(X, CC) \
- false
-#endif
-
-// doNotUseShadowReturn - Return true if the specified GCC type
-// should not be returned using a pointer to struct parameter.
-extern bool doNotUseShadowReturn(tree_node *type, tree_node *fndecl,
- CallingConv::ID CC);
-
-/// isSingleElementStructOrArray - If this is (recursively) a structure with one
-/// field or an array with one element, return the field type, otherwise return
-/// null. Returns null for complex number types. If ignoreZeroLength, the
-/// struct (recursively) may include zero-length fields in addition to the
-/// single element that has data. If rejectFatBitField, and the single element
-/// is a bitfield of a type that's bigger than the struct, return null anyway.
-extern tree_node *isSingleElementStructOrArray(tree_node *type,
- bool ignoreZeroLength,
- bool rejectFatBitfield);
-
-/// isZeroSizedStructOrUnion - Returns true if this is a struct or union
-/// which is zero bits wide.
-extern bool isZeroSizedStructOrUnion(tree_node *type);
-
-// getLLVMScalarTypeForStructReturn - Return LLVM Type if TY can be
-// returned as a scalar, otherwise return NULL. This is the default
-// target independent implementation.
-static inline
-const Type* getLLVMScalarTypeForStructReturn(tree_node *type, unsigned *Offset) {
- const Type *Ty = ConvertType(type);
- unsigned Size = getTargetData().getTypeAllocSize(Ty);
- *Offset = 0;
- if (Size == 0)
- return Type::getVoidTy(getGlobalContext());
- else if (Size == 1)
- return Type::getInt8Ty(getGlobalContext());
- else if (Size == 2)
- return Type::getInt16Ty(getGlobalContext());
- else if (Size <= 4)
- return Type::getInt32Ty(getGlobalContext());
- else if (Size <= 8)
- return Type::getInt64Ty(getGlobalContext());
- else if (Size <= 16)
- return IntegerType::get(getGlobalContext(), 128);
- else if (Size <= 32)
- return IntegerType::get(getGlobalContext(), 256);
-
- return NULL;
-}
-
-// getLLVMAggregateTypeForStructReturn - Return LLVM type if TY can be
-// returned as multiple values, otherwise return NULL. This is the default
-// target independent implementation.
-static inline
-const Type* getLLVMAggregateTypeForStructReturn(tree_node * /*type*/) {
- return NULL;
-}
-
-#ifndef LLVM_TRY_PASS_AGGREGATE_CUSTOM
-#define LLVM_TRY_PASS_AGGREGATE_CUSTOM(T, E, CC, C) \
- false
-#endif
-
-// LLVM_SHOULD_PASS_VECTOR_IN_INTEGER_REGS - Return true if this vector
-// type should be passed as integer registers. Generally vectors which are
-// not part of the target architecture should do this.
-#ifndef LLVM_SHOULD_PASS_VECTOR_IN_INTEGER_REGS
-#define LLVM_SHOULD_PASS_VECTOR_IN_INTEGER_REGS(TY) \
- false
-#endif
-
-// LLVM_SHOULD_PASS_VECTOR_USING_BYVAL_ATTR - Return true if this vector
-// type should be passed byval. Used for generic vectors on x86-64.
-#ifndef LLVM_SHOULD_PASS_VECTOR_USING_BYVAL_ATTR
-#define LLVM_SHOULD_PASS_VECTOR_USING_BYVAL_ATTR(X) \
- false
-#endif
-
-// LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR - Return true if this aggregate
-// value should be passed by value, i.e. passing its address with the byval
-// attribute bit set. The default is false.
-#ifndef LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR
-#define LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR(X, TY) \
- false
-#endif
-
-// LLVM_SHOULD_PASS_AGGREGATE_AS_FCA - Return true if this aggregate value
-// should be passed by value as a first class aggregate. The default is false.
-#ifndef LLVM_SHOULD_PASS_AGGREGATE_AS_FCA
-#define LLVM_SHOULD_PASS_AGGREGATE_AS_FCA(X, TY) \
- false
-#endif
-
-// LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS - Return true if this aggregate
-// value should be passed in a mixture of integer, floating point, and vector
-// registers. The routine should also return by reference a vector of the
-// types of the registers being used. The default is false.
-#ifndef LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS
-#define LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS(T, TY, CC, E) \
- false
-#endif
-
-// LLVM_AGGREGATE_PARTIALLY_PASSED_IN_REGS - Only called if
-// LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS returns true. This returns true if
-// there are only enough unused argument passing registers to pass a part of
-// the aggregate. Note, this routine should return false if none of the needed
-// registers are available.
-#ifndef LLVM_AGGREGATE_PARTIALLY_PASSED_IN_REGS
-#define LLVM_AGGREGATE_PARTIALLY_PASSED_IN_REGS(E, SE, ISR, CC) \
- false
-#endif
-
-// LLVM_BYVAL_ALIGNMENT - Returns the alignment of the type in bytes, if known,
-// in the context of its use as a function parameter.
-// Note that the alignment in the TYPE node is usually the alignment appropriate
-// when the type is used within a struct, which may or may not be appropriate
-// here.
-#ifndef LLVM_BYVAL_ALIGNMENT
-#define LLVM_BYVAL_ALIGNMENT(T) 0
-#endif
-
-// LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS - Return true if this aggregate
-// value should be passed in integer registers. By default, we do this for all
-// values that are not single-element structs. This ensures that things like
-// {short,short} are passed in one 32-bit chunk, not as two arguments (which
-// would often be 64-bits). We also do it for single-element structs when the
-// single element is a bitfield of a type bigger than the struct; the code
-// for field-by-field struct passing does not handle this one right.
-#ifndef LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS
-#define LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS(X, Y, Z) \
- !isSingleElementStructOrArray((X), false, true)
-#endif
-
-// LLVM_SHOULD_RETURN_SELT_STRUCT_AS_SCALAR - Return a TYPE tree if this single
-// element struct should be returned using the convention for that scalar TYPE,
-// 0 otherwise.
-// The returned TYPE must be the same size as X for this to work; that is
-// checked elsewhere. (Structs where this is not the case can be constructed
-// by abusing the __aligned__ attribute.)
-#ifndef LLVM_SHOULD_RETURN_SELT_STRUCT_AS_SCALAR
-#define LLVM_SHOULD_RETURN_SELT_STRUCT_AS_SCALAR(X) \
- isSingleElementStructOrArray(X, false, false)
-#endif
-
-// LLVM_SHOULD_RETURN_VECTOR_AS_SCALAR - Return a TYPE tree if this vector type
-// should be returned using the convention for that scalar TYPE, 0 otherwise.
-// X may be evaluated more than once.
-#ifndef LLVM_SHOULD_RETURN_VECTOR_AS_SCALAR
-#define LLVM_SHOULD_RETURN_VECTOR_AS_SCALAR(X,Y) 0
-#endif
-
-// LLVM_SHOULD_RETURN_VECTOR_AS_SHADOW - Return true if this vector type
-// should be returned using the aggregate shadow (sret) convention, 0 otherwise.
-// X may be evaluated more than once.
-#ifndef LLVM_SHOULD_RETURN_VECTOR_AS_SHADOW
-#define LLVM_SHOULD_RETURN_VECTOR_AS_SHADOW(X,Y) 0
-#endif
-
-// LLVM_SCALAR_TYPE_FOR_STRUCT_RETURN - Return LLVM Type if X can be
-// returned as a scalar, otherwise return NULL.
-#ifndef LLVM_SCALAR_TYPE_FOR_STRUCT_RETURN
-#define LLVM_SCALAR_TYPE_FOR_STRUCT_RETURN(X, Y) \
- getLLVMScalarTypeForStructReturn((X), (Y))
-#endif
-
-// LLVM_AGGR_TYPE_FOR_STRUCT_RETURN - Return LLVM Type if X can be
-// returned as an aggregate, otherwise return NULL.
-#ifndef LLVM_AGGR_TYPE_FOR_STRUCT_RETURN
-#define LLVM_AGGR_TYPE_FOR_STRUCT_RETURN(X, CC) \
- getLLVMAggregateTypeForStructReturn(X)
-#endif
-
-// LLVM_EXTRACT_MULTIPLE_RETURN_VALUE - Extract multiple return value from
-// SRC and assign it to DEST. Each target that supports multiple return
-// value must implement this hook.
-#ifndef LLVM_EXTRACT_MULTIPLE_RETURN_VALUE
-#define LLVM_EXTRACT_MULTIPLE_RETURN_VALUE(Src,Dest,V,B) \
- llvm_default_extract_multiple_return_value((Src),(Dest),(V),(B))
-#endif
-static inline
-void llvm_default_extract_multiple_return_value(Value * /*Src*/, Value * /*Dest*/,
- bool /*isVolatile*/,
- LLVMBuilder &/*Builder*/) {
- assert (0 && "LLVM_EXTRACT_MULTIPLE_RETURN_VALUE is not implemented!");
-}
-
-/// DefaultABI - This class implements the default LLVM ABI where structures are
-/// passed by decimating them into individual components and unions are passed
-/// by passing the largest member of the union.
-///
-class DefaultABI {
-protected:
- DefaultABIClient &C;
-public:
- DefaultABI(DefaultABIClient &c);
-
- bool isShadowReturn() const;
-
- /// HandleReturnType - This is invoked by the target-independent code for the
- /// return type. It potentially breaks down the argument and invokes methods
- /// on the client that indicate how its pieces should be handled. This
- /// handles things like returning structures via hidden parameters.
- void HandleReturnType(tree_node *type, tree_node *fn, bool isBuiltin);
-
- /// HandleArgument - This is invoked by the target-independent code for each
- /// argument type passed into the function. It potentially breaks down the
- /// argument and invokes methods on the client that indicate how its pieces
- /// should be handled. This handles things like decimating structures into
- /// their fields.
- void HandleArgument(tree_node *type, std::vector<const Type*> &ScalarElts,
- Attributes *Attributes = NULL);
-
- /// HandleUnion - Handle a UNION_TYPE or QUAL_UNION_TYPE tree.
- ///
- void HandleUnion(tree_node *type, std::vector<const Type*> &ScalarElts);
-
- /// PassInIntegerRegisters - Given an aggregate value that should be passed in
- /// integer registers, convert it to a structure containing ints and pass all
- /// of the struct elements in. If Size is set we pass only that many bytes.
- void PassInIntegerRegisters(tree_node *type,
- std::vector<const Type*> &ScalarElts,
- unsigned origSize, bool DontCheckAlignment);
-
- /// PassInMixedRegisters - Given an aggregate value that should be passed in
- /// mixed integer, floating point, and vector registers, convert it to a
- /// structure containing the specified struct elements in.
- void PassInMixedRegisters(const Type *Ty, std::vector<const Type*> &OrigElts,
- std::vector<const Type*> &ScalarElts);
-};
-
-#endif /* LLVM_ABI_H */
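
All of the LLVM_* hooks above follow the same override pattern: a target header may define the macro before this one is included, and the #ifndef fallback supplies a conservative default otherwise. A minimal sketch, with a hypothetical target value:

#include <iostream>

// What a target-specific header (e.g. x86/Target.h) might say; the value 16 is
// purely illustrative:
#define LLVM_BYVAL_ALIGNMENT(T) 16

// What the generic ABI header provides as a fallback (skipped here because the
// macro is already defined):
#ifndef LLVM_BYVAL_ALIGNMENT
#define LLVM_BYVAL_ALIGNMENT(T) 0   // 0 means "unknown, let LLVM decide"
#endif

int main() {
  // The macro argument is ignored by both definitions in this sketch.
  std::cout << "byval alignment used: " << LLVM_BYVAL_ALIGNMENT(some_type) << "\n";
  return 0;
}
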
Removed: dragonegg/trunk/llvm-backend.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-backend.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-backend.cpp (original)
+++ dragonegg/trunk/llvm-backend.cpp (removed)
@@ -1,2758 +0,0 @@
-//===-------- llvm-backend.cpp - High-level LLVM backend interface --------===//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file defines the high-level LLVM backend interface.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-extern "C" {
-#include "llvm-cache.h"
-}
-#include "llvm-constant.h"
-#include "llvm-debug.h"
-#include "llvm-os.h"
-#include "llvm-target.h"
-
-// LLVM headers
-#define DEBUG_TYPE "plugin"
-#include "llvm/LLVMContext.h"
-#include "llvm/Module.h"
-#include "llvm/ADT/StringExtras.h"
-#include "llvm/Assembly/PrintModulePass.h"
-#include "llvm/Bitcode/ReaderWriter.h"
-#include "llvm/CodeGen/RegAllocRegistry.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/FormattedStream.h"
-#include "llvm/Support/ManagedStatic.h"
-#include "llvm/Support/StandardPasses.h"
-#include "llvm/Target/SubtargetFeature.h"
-#include "llvm/Target/TargetData.h"
-#include "llvm/Target/TargetLibraryInfo.h"
-#include "llvm/Target/TargetRegistry.h"
-
-// System headers
-#include <gmp.h>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-
-#include "debug.h"
-#include "diagnostic.h"
-#include "flags.h"
-#include "gcc-plugin.h"
-#include "intl.h"
-#include "langhooks.h"
-#include "output.h"
-#include "params.h"
-#include "plugin-version.h"
-#include "toplev.h"
-#include "tree-flow.h"
-#include "tree-pass.h"
-#include "version.h"
-}
-
-#if (GCC_MAJOR != 4)
-#error Unsupported GCC major version
-#endif
-
-// Non-zero if bytecode from PCH is successfully read.
-int flag_llvm_pch_read;
-
-// Non-zero if libcalls should not be simplified.
-int flag_no_simplify_libcalls;
-
-// Non-zero if red-zone is disabled.
-//TODOstatic int flag_disable_red_zone = 0;
-
-// Non-zero if implicit floating point instructions are disabled.
-//TODOstatic int flag_no_implicit_float = 0;
-
-/// llvm_asm_file_name - Name of file to use for assembly code output.
-static const char *llvm_asm_file_name;
-
-// Global state for the LLVM backend.
-Module *TheModule = 0;
-DebugInfo *TheDebugInfo = 0;
-TargetMachine *TheTarget = 0;
-TargetFolder *TheFolder = 0;
-TypeConverter *TheTypeConverter = 0;
-raw_ostream *OutStream = 0; // Stream to write assembly code to.
-formatted_raw_ostream FormattedOutStream;
-
-static bool DebugPassArguments;
-static bool DebugPassStructure;
-static bool DisableLLVMOptimizations;
-static bool EnableGCCOptimizations;
-static bool EmitIR;
-static bool SaveGCCOutput;
-
-std::vector<std::pair<Constant*, int> > StaticCtors, StaticDtors;
-SmallSetVector<Constant*, 32> AttributeUsedGlobals;
-SmallSetVector<Constant*, 32> AttributeCompilerUsedGlobals;
-std::vector<Constant*> AttributeAnnotateGlobals;
-
-/// PerFunctionPasses - This is the list of cleanup passes run per-function
-/// as each is compiled. In cases where we are not doing IPO, it includes the
-/// code generator.
-static FunctionPassManager *PerFunctionPasses = 0;
-static PassManager *PerModulePasses = 0;
-static FunctionPassManager *CodeGenPasses = 0;
-
-static void createPerFunctionOptimizationPasses();
-static void createPerModuleOptimizationPasses();
-//TODOstatic void destroyOptimizationPasses();
-
-
-//===----------------------------------------------------------------------===//
-// Matching LLVM Values with GCC DECL trees
-//===----------------------------------------------------------------------===//
-
-/// set_decl_llvm - Remember the LLVM value for a GCC declaration.
-Value *set_decl_llvm (tree t, Value *V) {
- assert(HAS_RTL_P(t) && "Expected a declaration with RTL!");
- return (Value *)llvm_set_cached(t, V);
-}
-
-/// get_decl_llvm - Retrieve the LLVM value for a GCC declaration, or NULL.
-Value *get_decl_llvm(tree t) {
- assert(HAS_RTL_P(t) && "Expected a declaration with RTL!");
- return (Value *)llvm_get_cached(t);
-}
-
-/// set_decl_index - Associate a non-negative number with the given GCC
-/// declaration.
-int set_decl_index(tree t, int i) {
- assert(!HAS_RTL_P(t) && "Expected a declaration without RTL!");
- assert(i >= 0 && "Negative indices not allowed!");
- // In order to use zero as a special value (see get_decl_index) map the range
- // 0 .. INT_MAX to -1 .. INT_MIN.
- llvm_set_cached(t, (void *)(intptr_t)(-i - 1));
- return i;
-}
-
-/// get_decl_index - Get the non-negative number associated with the given GCC
-/// declaration. Returns a negative value if no such association has been made.
-int get_decl_index(tree t) {
- assert(!HAS_RTL_P(t) && "Expected a declaration without RTL!");
- // Map the range -1 .. INT_MIN back to 0 .. INT_MAX (see set_decl_index) and
- // send 0 (aka void) to -1.
- return -(1 + (int)(intptr_t)llvm_get_cached(t));
-}
-
-/// changeLLVMConstant - Replace Old with New everywhere, updating all maps
-/// (except for AttributeAnnotateGlobals, which is a different kind of animal).
-/// At this point we know that New is not in any of these maps.
-void changeLLVMConstant(Constant *Old, Constant *New) {
- assert(Old->use_empty() && "Old value has uses!");
-
- if (AttributeUsedGlobals.count(Old)) {
- AttributeUsedGlobals.remove(Old);
- AttributeUsedGlobals.insert(New);
- }
-
- if (AttributeCompilerUsedGlobals.count(Old)) {
- AttributeCompilerUsedGlobals.remove(Old);
- AttributeCompilerUsedGlobals.insert(New);
- }
-
- for (unsigned i = 0, e = StaticCtors.size(); i != e; ++i) {
- if (StaticCtors[i].first == Old)
- StaticCtors[i].first = New;
- }
-
- for (unsigned i = 0, e = StaticDtors.size(); i != e; ++i) {
- if (StaticDtors[i].first == Old)
- StaticDtors[i].first = New;
- }
-
- llvm_replace_cached(Old, New);
-}
-
-//TODO/// readLLVMValues - Read LLVM Types string table
-//TODOvoid readLLVMValues() {
-//TODO GlobalValue *V = TheModule->getNamedGlobal("llvm.pch.values");
-//TODO if (!V)
-//TODO return;
-//TODO
-//TODO GlobalVariable *GV = cast<GlobalVariable>(V);
-//TODO ConstantStruct *ValuesFromPCH = cast<ConstantStruct>(GV->getOperand(0));
-//TODO
-//TODO for (unsigned i = 0; i < ValuesFromPCH->getNumOperands(); ++i) {
-//TODO Value *Va = ValuesFromPCH->getOperand(i);
-//TODO
-//TODO if (!Va) {
-//TODO // If V is empty then insert NULL to represent empty entries.
-//TODO LLVMValues.push_back(Va);
-//TODO continue;
-//TODO }
-//TODO if (ConstantArray *CA = dyn_cast<ConstantArray>(Va)) {
-//TODO std::string Str = CA->getAsString();
-//TODO Va = TheModule->getValueSymbolTable().lookup(Str);
-//TODO }
-//TODO assert (Va != NULL && "Invalid Value in LLVMValues string table");
-//TODO LLVMValues.push_back(Va);
-//TODO }
-//TODO
-//TODO // Now, llvm.pch.values is not required so remove it from the symbol table.
-//TODO GV->eraseFromParent();
-//TODO}
-//TODO
-//TODO/// writeLLVMValues - GCC tree's uses LLVMValues vector's index to reach LLVM
-//TODO/// Values. Create a string table to hold these LLVM Values' names. This string
-//TODO/// table will be used to recreate LTypes vector after loading PCH.
-//TODOvoid writeLLVMValues() {
-//TODO if (LLVMValues.empty())
-//TODO return;
-//TODO
-//TODO LLVMContext &Context = getGlobalContext();
-//TODO
-//TODO std::vector<Constant *> ValuesForPCH;
-//TODO for (std::vector<Value *>::iterator I = LLVMValues.begin(),
-//TODO E = LLVMValues.end(); I != E; ++I) {
-//TODO if (Constant *C = dyn_cast_or_null<Constant>(*I))
-//TODO ValuesForPCH.push_back(C);
-//TODO else
-//TODO // Non constant values, e.g. arguments, are not at global scope.
-//TODO // When PCH is read, only global scope values are used.
-//TODO ValuesForPCH.push_back(Constant::getNullValue(Type::getInt32Ty(Context)));
-//TODO }
-//TODO
-//TODO // Create string table.
-//TODO Constant *LLVMValuesTable = ConstantStruct::get(Context, ValuesForPCH, false);
-//TODO
-//TODO // Create variable to hold this string table.
-//TODO new GlobalVariable(*TheModule, LLVMValuesTable->getType(), true,
-//TODO GlobalValue::ExternalLinkage,
-//TODO LLVMValuesTable,
-//TODO "llvm.pch.values");
-//TODO}
-
-/// handleVisibility - Forward decl visibility style to global.
-void handleVisibility(tree decl, GlobalValue *GV) {
- // If decl has visibility specified explicitly (via attribute) - honour
- // it. Otherwise (e.g. visibility specified via -fvisibility=hidden) honour
- // only if symbol is local.
- if (TREE_PUBLIC(decl) &&
- (DECL_VISIBILITY_SPECIFIED(decl) || !DECL_EXTERNAL(decl))) {
- if (DECL_VISIBILITY(decl) == VISIBILITY_HIDDEN)
- GV->setVisibility(GlobalValue::HiddenVisibility);
- else if (DECL_VISIBILITY(decl) == VISIBILITY_PROTECTED)
- GV->setVisibility(GlobalValue::ProtectedVisibility);
- else if (DECL_VISIBILITY(decl) == VISIBILITY_DEFAULT)
- GV->setVisibility(Function::DefaultVisibility);
- }
-}
-
-// GuessAtInliningThreshold - Figure out a reasonable threshold to pass llvm's
-// inliner. gcc has many options that control inlining, but we have decided
-// not to support anything like that for llvm-gcc.
-static unsigned GuessAtInliningThreshold() {
- if (optimize_size)
- // Reduce inline limit.
- return 75;
-
- if (optimize >= 3)
- return 275;
- return 225;
-}
-
-// SizeOfGlobalMatchesDecl - Whether the size of the given global value is the
-// same as that of the given GCC declaration. Conservatively returns 'true' if
-// the answer is unclear.
-static LLVM_ATTRIBUTE_UNUSED // Only called from asserts.
-bool SizeOfGlobalMatchesDecl(GlobalValue *GV, tree decl) {
- // If the GCC declaration has no size then nothing useful can be said here.
- if (!DECL_SIZE(decl))
- return true;
- assert(isInt64(DECL_SIZE(decl), true) && "Global decl with variable size!");
-
- const Type *Ty = GV->getType()->getElementType();
- // If the LLVM type has no size then a useful comparison cannot be made.
- if (!Ty->isSized())
- return true;
-
- // DECL_SIZE need not be a multiple of the alignment, while the LLVM size
- // always is. Correct for this.
- // TODO: Change getTypeSizeInBits for aggregate types so it is no longer
- // rounded up to the alignment.
- uint64_t gcc_size = getInt64(DECL_SIZE(decl), true);
- const TargetData *TD = TheTarget->getTargetData();
- unsigned Align = 8 * TD->getABITypeAlignment(Ty);
- return TheTarget->getTargetData()->getTypeAllocSizeInBits(Ty) ==
- ((gcc_size + Align - 1) / Align) * Align;
-}
-
-#ifndef LLVM_TARGET_NAME
-#error LLVM_TARGET_NAME macro not specified
-#endif
-
-namespace llvm {
-#define Declare2(TARG, MOD) extern "C" void LLVMInitialize ## TARG ## MOD()
-#define Declare(T, M) Declare2(T, M)
- Declare(LLVM_TARGET_NAME, TargetInfo);
- Declare(LLVM_TARGET_NAME, Target);
- Declare(LLVM_TARGET_NAME, AsmPrinter);
-#undef Declare
-#undef Declare2
-}
-
-/// ConfigureLLVM - Initialize and configure LLVM.
-static void ConfigureLLVM(void) {
- // Initialize the LLVM backend.
-#define DoInit2(TARG, MOD) LLVMInitialize ## TARG ## MOD()
-#define DoInit(T, M) DoInit2(T, M)
- DoInit(LLVM_TARGET_NAME, TargetInfo);
- DoInit(LLVM_TARGET_NAME, Target);
- DoInit(LLVM_TARGET_NAME, AsmPrinter);
-#undef DoInit
-#undef DoInit2
-
- // Initialize LLVM command line options.
- std::vector<const char*> Args;
- Args.push_back(progname); // program name
-
-//TODO // Allow targets to specify PIC options and other stuff to the corresponding
-//TODO // LLVM backends.
-//TODO#ifdef LLVM_SET_RED_ZONE_FLAG
-//TODO LLVM_SET_RED_ZONE_FLAG(flag_disable_red_zone)
-//TODO#endif
-#ifdef LLVM_SET_TARGET_OPTIONS
- LLVM_SET_TARGET_OPTIONS(Args);
-#endif
-#ifdef LLVM_SET_MACHINE_OPTIONS
- LLVM_SET_MACHINE_OPTIONS(Args);
-#endif
-//TODO#ifdef LLVM_SET_IMPLICIT_FLOAT
-//TODO LLVM_SET_IMPLICIT_FLOAT(flag_no_implicit_float)
-//TODO#endif
-
- if (time_report || !quiet_flag || flag_detailed_statistics)
- Args.push_back("--time-passes");
- if (!quiet_flag || flag_detailed_statistics)
- Args.push_back("--stats");
- if (fast_math_flags_set_p())
- Args.push_back("--enable-unsafe-fp-math");
- if (flag_finite_math_only) {
- Args.push_back("--enable-no-nans-fp-math");
- Args.push_back("--enable-no-infs-fp-math");
- }
- if (!flag_omit_frame_pointer)
- Args.push_back("--disable-fp-elim");
- if (!flag_zero_initialized_in_bss)
- Args.push_back("--nozero-initialized-in-bss");
- if (flag_verbose_asm)
- Args.push_back("--asm-verbose");
- if (DebugPassStructure)
- Args.push_back("--debug-pass=Structure");
- if (DebugPassArguments)
- Args.push_back("--debug-pass=Arguments");
- if (flag_unwind_tables)
- Args.push_back("--unwind-tables");
- if (!flag_schedule_insns)
- Args.push_back("--pre-RA-sched=source");
- if (flag_function_sections)
- Args.push_back("--ffunction-sections");
- if (flag_data_sections)
- Args.push_back("--fdata-sections");
-
- // If there are options that should be passed through to the LLVM backend
- // directly from the command line, do so now. This is mainly for debugging
- // purposes, and shouldn't really be for general use.
- std::vector<std::string> ArgStrings;
-
- unsigned threshold = GuessAtInliningThreshold();
- std::string Arg("--inline-threshold="+utostr(threshold));
- ArgStrings.push_back(Arg);
-
-//TODO if (flag_limited_precision > 0) {
-//TODO std::string Arg("--limit-float-precision="+utostr(flag_limited_precision));
-//TODO ArgStrings.push_back(Arg);
-//TODO }
-
- if (flag_stack_protect > 0) {
- std::string Arg("--stack-protector-buffer-size=" +
- utostr(PARAM_VALUE(PARAM_SSP_BUFFER_SIZE)));
- ArgStrings.push_back(Arg);
- }
-
- for (unsigned i = 0, e = ArgStrings.size(); i != e; ++i)
- Args.push_back(ArgStrings[i].c_str());
-
-//TODO std::vector<std::string> LLVM_Optns; // Avoid deallocation before opts parsed!
-//TODO if (llvm_optns) {
-//TODO llvm::SmallVector<llvm::StringRef, 16> Buf;
-//TODO SplitString(llvm_optns, Buf);
-//TODO for(unsigned i = 0, e = Buf.size(); i != e; ++i) {
-//TODO LLVM_Optns.push_back(Buf[i]);
-//TODO Args.push_back(LLVM_Optns.back().c_str());
-//TODO }
-//TODO }
-
- Args.push_back(0); // Null terminator.
- int pseudo_argc = Args.size()-1;
- llvm::cl::ParseCommandLineOptions(pseudo_argc, const_cast<char**>(&Args[0]));
-}
-
-/// ComputeTargetTriple - Determine the target triple to use.
-static std::string ComputeTargetTriple() {
- // If the target wants to override the architecture, e.g. turning
- // powerpc-darwin-... into powerpc64-darwin-... when -m64 is enabled, do so
- // now.
- std::string TargetTriple = TARGET_NAME;
-#ifdef LLVM_OVERRIDE_TARGET_ARCH
- std::string Arch = LLVM_OVERRIDE_TARGET_ARCH();
- if (!Arch.empty()) {
- std::string::size_type DashPos = TargetTriple.find('-');
- if (DashPos != std::string::npos)// If we have a sane t-t, replace the arch.
- TargetTriple = Arch + TargetTriple.substr(DashPos);
- }
-#endif
-#ifdef LLVM_OVERRIDE_TARGET_VERSION
- char *NewTriple;
- bool OverRidden = LLVM_OVERRIDE_TARGET_VERSION(TargetTriple.c_str(),
- &NewTriple);
- if (OverRidden)
- TargetTriple = std::string(NewTriple);
-#endif
- return TargetTriple;
-}
-
-/// CreateTargetMachine - Create the TargetMachine we will generate code with.
-static void CreateTargetMachine(const std::string &TargetTriple) {
- // FIXME: Figure out how to select the target and pass down subtarget info.
- std::string Err;
- const Target *TME =
- TargetRegistry::lookupTarget(TargetTriple, Err);
- if (!TME)
- report_fatal_error(Err);
-
- // Figure out the subtarget feature string we pass to the target.
- std::string FeatureStr;
- // The target can set LLVM_SET_SUBTARGET_FEATURES to configure the LLVM
- // backend.
-#ifdef LLVM_SET_SUBTARGET_FEATURES
- SubtargetFeatures Features;
- LLVM_SET_SUBTARGET_FEATURES(Features);
- FeatureStr = Features.getString();
-#endif
- TheTarget = TME->createTargetMachine(TargetTriple, FeatureStr);
- assert(TheTarget->getTargetData()->isBigEndian() == BYTES_BIG_ENDIAN);
-}
-
-/// CreateModule - Create and initialize a module to output LLVM IR to.
-static void CreateModule(const std::string &TargetTriple) {
- // Create the module itself.
- StringRef ModuleID = main_input_filename ? main_input_filename : "";
- TheModule = new Module(ModuleID, getGlobalContext());
-
- // Insert a special .ident directive to identify the version of the plugin
- // which compiled this code. The format of the .ident string is patterned
- // after the ones produced by GCC.
-#ifdef IDENT_ASM_OP
- if (!flag_no_ident) {
- const char *pkg_version = "(GNU) ";
-
- if (strcmp ("(GCC) ", pkgversion_string))
- pkg_version = pkgversion_string;
-
- std::string IdentString = IDENT_ASM_OP;
- IdentString += "\"GCC: ";
- IdentString += pkg_version;
- IdentString += version_string;
- IdentString += " LLVM: ";
- IdentString += REVISION;
- IdentString += "\"";
- TheModule->setModuleInlineAsm(IdentString);
- }
-#endif
-
- // Install information about the target triple and data layout into the module
- // for optimizer use.
- TheModule->setTargetTriple(TargetTriple);
- TheModule->setDataLayout(TheTarget->getTargetData()->
- getStringRepresentation());
-}
-
-/// flag_default_initialize_globals - Whether global variables with no explicit
-/// initial value should be zero initialized.
-bool flag_default_initialize_globals = true; // GCC always initializes to zero
-
-/// flag_odr - Whether the language being compiled obeys the One Definition Rule
-/// (i.e. if the same function is defined in multiple compilation units, all the
-/// definitions are equivalent).
-bool flag_odr;
-
-/// flag_vararg_requires_arguments - Do not consider functions with no arguments
-/// to take a variable number of arguments (...). If set then a function like
-/// "T foo() {}" will be treated like "T foo(void) {}" and not "T foo(...) {}".
-bool flag_vararg_requires_arguments;
-
-/// flag_force_vararg_prototypes - Force prototypes to take a variable number of
-/// arguments (...). This is helpful if the language front-end sometimes emits
-/// calls where the call arguments do not match the callee function declaration.
-bool flag_force_vararg_prototypes;
-
-/// InstallLanguageSettings - Do any language-specific back-end configuration.
-static void InstallLanguageSettings() {
- // The principle here is that not doing any language-specific configuration
- // should still result in correct code. The language-specific settings are
- // only for obtaining better code, by exploiting language-specific features.
- StringRef LanguageName = lang_hooks.name;
-
- if (LanguageName == "GNU Ada") {
- flag_default_initialize_globals = false; // Uninitialized means what it says
- flag_odr = true; // Ada obeys the one-definition-rule
- } else if (LanguageName == "GNU C") {
- flag_vararg_requires_arguments = true; // "T foo() {}" -> "T foo(void) {}"
- } else if (LanguageName == "GNU C++") {
- flag_odr = true; // C++ obeys the one-definition-rule
- } else if (LanguageName == "GNU Fortran") {
- flag_force_vararg_prototypes = true;
- } else if (LanguageName == "GNU GIMPLE") { // LTO gold plugin
- } else if (LanguageName == "GNU Java") {
- } else if (LanguageName == "GNU Objective-C") {
- flag_vararg_requires_arguments = true; // "T foo() {}" -> "T foo(void) {}"
- } else if (LanguageName == "GNU Objective-C++") {
- flag_odr = true; // Objective C++ obeys the one-definition-rule
- }
-}
-
-/// InitializeBackend - Initialize the GCC to LLVM conversion machinery.
-/// Can safely be called multiple times.
-static void InitializeBackend(void) {
- static bool Initialized = false;
- if (Initialized)
- return;
-
- // Initialize and configure LLVM.
- ConfigureLLVM();
-
- // Create the target machine to generate code for.
- const std::string TargetTriple = ComputeTargetTriple();
- CreateTargetMachine(TargetTriple);
-
- // Create a module to hold the generated LLVM IR.
- CreateModule(TargetTriple);
-
- TheTypeConverter = new TypeConverter();
- TheFolder = new TargetFolder(TheTarget->getTargetData());
-
- if (debug_info_level > DINFO_LEVEL_NONE)
- TheDebugInfo = new DebugInfo(TheModule);
- if (TheDebugInfo)
- TheDebugInfo->Initialize();
-
- // Perform language specific configuration.
- InstallLanguageSettings();
-
- Initialized = true;
-}
-
-/// InitializeOutputStreams - Initialize the assembly code output streams.
-static void InitializeOutputStreams(bool Binary) {
- assert(!OutStream && "Output stream already initialized!");
- std::string Error;
-
- OutStream = new raw_fd_ostream(llvm_asm_file_name, Error,
- Binary ? raw_fd_ostream::F_Binary : 0);
-
- if (!Error.empty())
- report_fatal_error(Error);
-
- FormattedOutStream.setStream(*OutStream,
- formatted_raw_ostream::PRESERVE_STREAM);
-}
-
-//TODOoFILEstream *AsmIntermediateOutStream = 0;
-//TODO
-//TODO/// llvm_pch_read - Read bytecode from PCH file. Initialize TheModule and setup
-//TODO/// LTypes vector.
-//TODOvoid llvm_pch_read(const unsigned char *Buffer, unsigned Size) {
-//TODO std::string ModuleName = TheModule->getModuleIdentifier();
-//TODO
-//TODO delete TheModule;
-//TODO delete TheDebugInfo;
-//TODO
-//TODO clearTargetBuiltinCache();
-//TODO
-//TODO MemoryBuffer *MB = MemoryBuffer::getNewMemBuffer(Size, ModuleName.c_str());
-//TODO memcpy((char*)MB->getBufferStart(), Buffer, Size);
-//TODO
-//TODO std::string ErrMsg;
-//TODO TheModule = ParseBitcodeFile(MB, getGlobalContext(), &ErrMsg);
-//TODO delete MB;
-//TODO
-//TODO // FIXME - Do not disable debug info while writing pch.
-//TODO if (!flag_pch_file && debug_info_level > DINFO_LEVEL_NONE) {
-//TODO TheDebugInfo = new DebugInfo(TheModule);
-//TODO TheDebugInfo->Initialize();
-//TODO }
-//TODO
-//TODO if (!TheModule) {
-//TODO errs() << "Error reading bytecodes from PCH file\n";
-//TODO errs() << ErrMsg << "\n";
-//TODO exit(1);
-//TODO }
-//TODO
-//TODO if (PerFunctionPasses || PerModulePasses) {
-//TODO destroyOptimizationPasses();
-//TODO
-//TODO // Don't run codegen, when we should output PCH
-//TODO if (flag_pch_file)
-//TODO llvm_pch_write_init();
-//TODO }
-//TODO
-//TODO // Read LLVM Types string table
-//TODO readLLVMTypesStringTable();
-//TODO readLLVMValues();
-//TODO
-//TODO flag_llvm_pch_read = 1;
-//TODO}
-//TODO
-//TODO/// llvm_pch_write_init - Initialize PCH writing.
-//TODOvoid llvm_pch_write_init(void) {
-//TODO timevar_push(TV_LLVM_INIT);
-//TODO AsmOutStream = new oFILEstream(asm_out_file);
-//TODO // FIXME: disentangle ostream madness here. Kill off ostream and FILE.
-//TODO AsmOutRawStream =
-//TODO new formatted_raw_ostream(*new raw_os_ostream(*AsmOutStream),
-//TODO formatted_raw_ostream::DELETE_STREAM);
-//TODO
-//TODO PerModulePasses = new PassManager();
-//TODO PerModulePasses->add(new TargetData(*TheTarget->getTargetData()));
-//TODO
-//TODO // If writing to stdout, set binary mode.
-//TODO if (asm_out_file == stdout)
-//TODO sys::Program::ChangeStdoutToBinary();
-//TODO
-//TODO // Emit an LLVM .bc file to the output. This is used when passed
-//TODO // -emit-llvm -c to the GCC driver.
-//TODO PerModulePasses->add(createBitcodeWriterPass(*AsmOutStream));
-//TODO
-//TODO // Disable emission of .ident into the output file... which is completely
-//TODO // wrong for llvm/.bc emission cases.
-//TODO flag_no_ident = 1;
-//TODO
-//TODO flag_llvm_pch_read = 0;
-//TODO
-//TODO timevar_pop(TV_LLVM_INIT);
-//TODO}
-
-//TODOstatic void destroyOptimizationPasses() {
-//TODO delete PerFunctionPasses;
-//TODO delete PerModulePasses;
-//TODO delete CodeGenPasses;
-//TODO
-//TODO PerFunctionPasses = 0;
-//TODO PerModulePasses = 0;
-//TODO CodeGenPasses = 0;
-//TODO}
-
-static void createPerFunctionOptimizationPasses() {
- if (PerFunctionPasses)
- return;
-
- // Create and set up the per-function pass manager.
- // FIXME: Move the code generator to be function-at-a-time.
- PerFunctionPasses =
- new FunctionPassManager(TheModule);
- PerFunctionPasses->add(new TargetData(*TheTarget->getTargetData()));
-
- // In -O0 if checking is disabled, we don't even have per-function passes.
- bool HasPerFunctionPasses = false;
-#ifdef ENABLE_CHECKING
- PerFunctionPasses->add(createVerifierPass());
- HasPerFunctionPasses = true;
-#endif
-
- if (optimize > 0 && !DisableLLVMOptimizations) {
- HasPerFunctionPasses = true;
-
- TargetLibraryInfo *TLI =
- new TargetLibraryInfo(Triple(TheModule->getTargetTriple()));
- if (flag_no_simplify_libcalls)
- TLI->disableAllFunctions();
- PerFunctionPasses->add(TLI);
-
- PerFunctionPasses->add(createCFGSimplificationPass());
- if (optimize == 1)
- PerFunctionPasses->add(createPromoteMemoryToRegisterPass());
- else
- PerFunctionPasses->add(createScalarReplAggregatesPass());
- PerFunctionPasses->add(createInstructionCombiningPass());
- }
-
- // If there are no module-level passes that have to be run, we codegen as
- // each function is parsed.
- // FIXME: We can't figure this out until we know there are no always-inline
- // functions.
- // FIXME: This is disabled right now until bugs can be worked out. Reenable
- // this for fast -O0 compiles!
- if (!EmitIR && 0) {
- FunctionPassManager *PM = PerFunctionPasses;
- HasPerFunctionPasses = true;
-
- CodeGenOpt::Level OptLevel = CodeGenOpt::Default; // -O2, -Os, and -Oz
- if (optimize == 0)
- OptLevel = CodeGenOpt::None;
- else if (optimize == 1)
- OptLevel = CodeGenOpt::Less;
- else if (optimize == 3)
- // -O3 and above.
- OptLevel = CodeGenOpt::Aggressive;
-
- // Request that addPassesToEmitFile run the Verifier after running
- // passes which modify the IR.
-#ifndef NDEBUG
- bool DisableVerify = false;
-#else
- bool DisableVerify = true;
-#endif
-
- // Normal mode, emit a .s file by running the code generator.
- // Note, this also adds code generator level optimization passes.
- InitializeOutputStreams(false);
- if (TheTarget->addPassesToEmitFile(*PM, FormattedOutStream,
- TargetMachine::CGFT_AssemblyFile,
- OptLevel, DisableVerify)) {
- errs() << "Error interfacing to target machine!\n";
- exit(1);
- }
- }
-
- if (HasPerFunctionPasses) {
- PerFunctionPasses->doInitialization();
- } else {
- delete PerFunctionPasses;
- PerFunctionPasses = 0;
- }
-}
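
As a side note, the -O level mapping used above defaults several GCC levels to the same setting. A tiny standalone sketch of that mapping, with a plain enum standing in for CodeGenOpt::Level (not the LLVM type):

#include <cassert>

// Stand-in for llvm::CodeGenOpt::Level.
enum OptLevel { None, Less, Default, Aggressive };

// Map GCC's 'optimize' level to a code generator level, mirroring the chain
// of comparisons in createPerFunctionOptimizationPasses.
static OptLevel MapOptimizeLevel(int optimize) {
  OptLevel Level = Default;          // -O2, -Os, and -Oz
  if (optimize == 0)
    Level = None;
  else if (optimize == 1)
    Level = Less;
  else if (optimize == 3)
    Level = Aggressive;              // -O3 and above
  return Level;
}

int main() {
  assert(MapOptimizeLevel(0) == None);
  assert(MapOptimizeLevel(2) == Default);
  assert(MapOptimizeLevel(3) == Aggressive);
  return 0;
}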
-
-static void createPerModuleOptimizationPasses() {
- if (PerModulePasses)
- // llvm_pch_write_init has already created the per module passes.
- return;
-
- // FIXME: At -O0/-O1, we should stream out functions one at a time.
- PerModulePasses = new PassManager();
- PerModulePasses->add(new TargetData(*TheTarget->getTargetData()));
- bool HasPerModulePasses = false;
-
- if (!DisableLLVMOptimizations) {
- TargetLibraryInfo *TLI =
- new TargetLibraryInfo(Triple(TheModule->getTargetTriple()));
- if (flag_no_simplify_libcalls)
- TLI->disableAllFunctions();
- PerModulePasses->add(TLI);
-
- bool NeedAlwaysInliner = false;
- llvm::Pass *InliningPass = 0;
- if (flag_inline_small_functions && !flag_no_inline) {
- InliningPass = createFunctionInliningPass(); // Inline small functions
- } else {
- // If full inliner is not run, check if always-inline is needed to handle
- // functions that are marked as always_inline.
- // TODO: Consider letting the GCC inliner do this.
- for (Module::iterator I = TheModule->begin(), E = TheModule->end();
- I != E; ++I)
- if (I->hasFnAttr(Attribute::AlwaysInline)) {
- NeedAlwaysInliner = true;
- break;
- }
-
- if (NeedAlwaysInliner)
- InliningPass = createAlwaysInlinerPass(); // Inline always_inline funcs
- }
-
- HasPerModulePasses = true;
- createStandardModulePasses(PerModulePasses, optimize,
- optimize_size,
- flag_unit_at_a_time, flag_unroll_loops,
- !flag_no_simplify_libcalls, flag_exceptions,
- InliningPass);
- }
-
- if (EmitIR && 0) {
- // Emit an LLVM .bc file to the output. This is used when passed
- // -emit-llvm -c to the GCC driver.
- InitializeOutputStreams(true);
- PerModulePasses->add(createBitcodeWriterPass(*OutStream));
- HasPerModulePasses = true;
- } else if (EmitIR) {
- // Emit an LLVM .ll file to the output. This is used when passed
- // -emit-llvm -S to the GCC driver.
- InitializeOutputStreams(false);
- PerModulePasses->add(createPrintModulePass(OutStream));
- HasPerModulePasses = true;
- } else {
- // If there are passes we have to run on the entire module, we do codegen
- // as a separate "pass" after that happens.
- // However if there are no module-level passes that have to be run, we
- // codegen as each function is parsed.
- // FIXME: This is disabled right now until bugs can be worked out. Reenable
- // this for fast -O0 compiles!
- if (PerModulePasses || 1) {
- FunctionPassManager *PM = CodeGenPasses =
- new FunctionPassManager(TheModule);
- PM->add(new TargetData(*TheTarget->getTargetData()));
-
- CodeGenOpt::Level OptLevel = CodeGenOpt::Default;
-
- switch (optimize) {
- default: break;
- case 0: OptLevel = CodeGenOpt::None; break;
- case 3: OptLevel = CodeGenOpt::Aggressive; break;
- }
-
- // Request that addPassesToEmitFile run the Verifier after running
- // passes which modify the IR.
-#ifndef NDEBUG
- bool DisableVerify = false;
-#else
- bool DisableVerify = true;
-#endif
-
- // Normal mode, emit a .s file by running the code generator.
- // Note, this also adds code generator level optimization passes.
- InitializeOutputStreams(false);
- if (TheTarget->addPassesToEmitFile(*PM, FormattedOutStream,
- TargetMachine::CGFT_AssemblyFile,
- OptLevel, DisableVerify)) {
- errs() << "Error interfacing to target machine!\n";
- exit(1);
- }
- }
- }
-
- if (!HasPerModulePasses) {
- delete PerModulePasses;
- PerModulePasses = 0;
- }
-}
-
-//TODO/// llvm_asm_file_start - Start the .s file.
-//TODOvoid llvm_asm_file_start(void) {
-//TODO timevar_push(TV_LLVM_INIT);
-//TODO AsmOutStream = new oFILEstream(asm_out_file);
-//TODO // FIXME: disentangle ostream madness here. Kill off ostream and FILE.
-//TODO AsmOutRawStream =
-//TODO new formatted_raw_ostream(*new raw_os_ostream(*AsmOutStream),
-//TODO formatted_raw_ostream::DELETE_STREAM);
-//TODO
-//TODO flag_llvm_pch_read = 0;
-//TODO
-//TODO if (EmitIR)
-//TODO // Disable emission of .ident into the output file... which is completely
-//TODO // wrong for llvm/.bc emission cases.
-//TODO flag_no_ident = 1;
-//TODO
-//TODO // If writing to stdout, set binary mode.
-//TODO if (asm_out_file == stdout)
-//TODO sys::Program::ChangeStdoutToBinary();
-//TODO
-//TODO AttributeUsedGlobals.clear();
-//TODO AttributeCompilerUsedGlobals.clear();
-//TODO timevar_pop(TV_LLVM_INIT);
-//TODO}
-
- /// CreateStructorsList - Convert a list of static ctors/dtors to an
-/// initializer suitable for the llvm.global_[cd]tors globals.
-static void CreateStructorsList(std::vector<std::pair<Constant*, int> > &Tors,
- const char *Name) {
- std::vector<Constant*> InitList;
- std::vector<Constant*> StructInit;
- StructInit.resize(2);
-
- LLVMContext &Context = getGlobalContext();
-
- const Type *FPTy =
- FunctionType::get(Type::getVoidTy(Context),
- std::vector<const Type*>(), false);
- FPTy = FPTy->getPointerTo();
-
- for (unsigned i = 0, e = Tors.size(); i != e; ++i) {
- StructInit[0] = ConstantInt::get(Type::getInt32Ty(Context), Tors[i].second);
-
- // __attribute__(constructor) can be on a function with any type. Make sure
- // the pointer is void()*.
- StructInit[1] = TheFolder->CreateBitCast(Tors[i].first, FPTy);
- InitList.push_back(ConstantStruct::get(Context, StructInit, false));
- }
- Constant *Array = ConstantArray::get(
- ArrayType::get(InitList[0]->getType(), InitList.size()), InitList);
- new GlobalVariable(*TheModule, Array->getType(), false,
- GlobalValue::AppendingLinkage,
- Array, Name);
-}
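
To make the data flow concrete, here is a small standalone model of what register_ctor_dtor collects and CreateStructorsList packages up; the struct layout and function names are illustrative stand-ins, not LLVM API:

#include <iostream>
#include <utility>
#include <vector>

// Model of one llvm.global_ctors entry: { i32 priority, void ()* function }.
struct StructorEntry {
  int Priority;
  void (*Fn)();
};

static void InitFoo() {}
static void InitBar() {}

int main() {
  // register_ctor_dtor pushes (function, priority) pairs like these.
  std::vector<std::pair<void (*)(), int> > StaticCtors;
  StaticCtors.push_back(std::make_pair(&InitFoo, 65535));
  StaticCtors.push_back(std::make_pair(&InitBar, 101));

  // CreateStructorsList turns each pair into a two-field struct and collects
  // them into the appending-linkage array used as the global's initializer.
  std::vector<StructorEntry> InitList;
  for (unsigned i = 0, e = StaticCtors.size(); i != e; ++i) {
    StructorEntry Entry = { StaticCtors[i].second, StaticCtors[i].first };
    InitList.push_back(Entry);
  }

  std::cout << "llvm.global_ctors would get " << InitList.size()
            << " entries\n";
  return 0;
}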
-
- /// ConvertMetadataStringToGV - Convert a string to a global value, reusing an
- /// existing global if possible.
-Constant* ConvertMetadataStringToGV(const char *str) {
-
- Constant *Init = ConstantArray::get(getGlobalContext(), std::string(str));
-
- // Use cached string if it exists.
- static std::map<Constant*, GlobalVariable*> StringCSTCache;
- GlobalVariable *&Slot = StringCSTCache[Init];
- if (Slot) return Slot;
-
- // Create a new string global.
- GlobalVariable *GV = new GlobalVariable(*TheModule, Init->getType(), true,
- GlobalVariable::PrivateLinkage,
- Init, ".str");
- GV->setSection("llvm.metadata");
- Slot = GV;
- return GV;
-
-}
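
The static map above implements a simple "create once, reuse afterwards" cache. A minimal sketch of the same idiom, with plain std::string objects standing in for the metadata globals (all names invented):

#include <cassert>
#include <map>
#include <string>

// Return a canonical object for the given string, creating it on first use
// and reusing the cached one afterwards - the same pattern as StringCSTCache.
static const std::string *GetOrCreateStringGlobal(const std::string &Init) {
  static std::map<std::string, const std::string *> Cache;
  const std::string *&Slot = Cache[Init];
  if (Slot)
    return Slot;                     // already created: reuse it
  Slot = new std::string(Init);      // first use: create and remember it
  return Slot;
}

int main() {
  const std::string *A = GetOrCreateStringGlobal("file.c");
  const std::string *B = GetOrCreateStringGlobal("file.c");
  assert(A == B && "Second lookup should reuse the cached object");
  (void)A; (void)B;
  return 0;
}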
-
- /// AddAnnotateAttrsToGlobal - Adds decls that have an annotate attribute to a
-/// vector to be emitted later.
-void AddAnnotateAttrsToGlobal(GlobalValue *GV, tree decl) {
- LLVMContext &Context = getGlobalContext();
-
- // Handle annotate attribute on global.
- tree annotateAttr = lookup_attribute("annotate", DECL_ATTRIBUTES (decl));
- if (annotateAttr == 0)
- return;
-
- // Get file and line number
- Constant *lineNo = ConstantInt::get(Type::getInt32Ty(Context),
- DECL_SOURCE_LINE(decl));
- Constant *file = ConvertMetadataStringToGV(DECL_SOURCE_FILE(decl));
- const Type *SBP = Type::getInt8PtrTy(Context);
- file = TheFolder->CreateBitCast(file, SBP);
-
- // There may be multiple annotate attributes. Pass the return value of
- // lookup_attribute to successive lookups.
- while (annotateAttr) {
-
- // Each annotate attribute is a tree list.
- // Get value of list which is our linked list of args.
- tree args = TREE_VALUE(annotateAttr);
-
- // Each annotate attribute may have multiple args.
- // Treat each arg as if it were a separate annotate attribute.
- for (tree a = args; a; a = TREE_CHAIN(a)) {
- // Each element of the arg list is a tree list, so get value
- tree val = TREE_VALUE(a);
-
- // Assert it's a string, and then get that string.
- assert(TREE_CODE(val) == STRING_CST &&
- "Annotate attribute arg should always be a string");
- Constant *strGV = EmitAddressOf(val);
- Constant *Element[4] = {
- TheFolder->CreateBitCast(GV,SBP),
- TheFolder->CreateBitCast(strGV,SBP),
- file,
- lineNo
- };
-
- AttributeAnnotateGlobals.push_back(
- ConstantStruct::get(Context, Element, 4, false));
- }
-
- // Get next annotate attribute.
- annotateAttr = TREE_CHAIN(annotateAttr);
- if (annotateAttr)
- annotateAttr = lookup_attribute("annotate", annotateAttr);
- }
-}
-
-/// emit_global - Emit the specified VAR_DECL or aggregate CONST_DECL to LLVM as
-/// a global variable. This function implements the end of assemble_variable.
-static void emit_global(tree decl) {
- // FIXME: Support alignment on globals: DECL_ALIGN.
- // FIXME: DECL_PRESERVE_P indicates the var is marked with attribute 'used'.
-
- // Global register variables don't turn into LLVM GlobalVariables.
- if (TREE_CODE(decl) == VAR_DECL && DECL_REGISTER(decl))
- return;
-
- // If we encounter a forward declaration then do not emit the global yet.
- if (!TYPE_SIZE(TREE_TYPE(decl)))
- return;
-
-//TODO timevar_push(TV_LLVM_GLOBALS);
-
- // Get or create the global variable now.
- GlobalVariable *GV = cast<GlobalVariable>(DECL_LLVM(decl));
-
- // Convert the initializer over.
- Constant *Init;
- if (DECL_INITIAL(decl) == 0 || DECL_INITIAL(decl) == error_mark_node) {
- // Reconvert the type in case the forward def of the global and the real def
- // differ in type (e.g. declared as 'int A[]', and defined as 'int A[100]').
- const Type *Ty = ConvertType(TREE_TYPE(decl));
- Init = getDefaultValue(Ty);
- } else {
- assert((TREE_CONSTANT(DECL_INITIAL(decl)) ||
- TREE_CODE(DECL_INITIAL(decl)) == STRING_CST) &&
- "Global initializer should be constant!");
-
- // Temporarily set an initializer for the global, so we don't infinitely
- // recurse. If we don't do this, we can hit cases where we see "oh a global
- // with an initializer hasn't been initialized yet, call emit_global on it".
- // When constructing the initializer it might refer to itself.
- // This can happen for things like void *G = &G;
- GV->setInitializer(UndefValue::get(GV->getType()->getElementType()));
- Init = ConvertConstant(DECL_INITIAL(decl));
- }
-
- // If we had a forward definition that has a type that disagrees with our
- // initializer, insert a cast now. This sort of thing occurs when we have a
- // global union, and the LLVM type followed a union initializer that is
- // different from the union element used for the type.
- if (GV->getType()->getElementType() != Init->getType()) {
- GV->removeFromParent();
- GlobalVariable *NGV = new GlobalVariable(*TheModule, Init->getType(),
- GV->isConstant(),
- GlobalValue::ExternalLinkage, 0,
- GV->getName());
- GV->replaceAllUsesWith(TheFolder->CreateBitCast(NGV, GV->getType()));
- changeLLVMConstant(GV, NGV);
- delete GV;
- SET_DECL_LLVM(decl, NGV);
- GV = NGV;
- }
-
- // Set the initializer.
- GV->setInitializer(Init);
-
- // Set thread local (TLS)
- if (TREE_CODE(decl) == VAR_DECL && DECL_THREAD_LOCAL_P(decl))
- GV->setThreadLocal(true);
-
- // Set the linkage.
- GlobalValue::LinkageTypes Linkage;
-
- if (CODE_CONTAINS_STRUCT (TREE_CODE (decl), TS_DECL_WITH_VIS)
- && false) {// FIXME DECL_LLVM_PRIVATE(decl)) {
- Linkage = GlobalValue::PrivateLinkage;
- } else if (CODE_CONTAINS_STRUCT (TREE_CODE (decl), TS_DECL_WITH_VIS)
- && false) {//FIXME DECL_LLVM_LINKER_PRIVATE(decl)) {
- Linkage = GlobalValue::LinkerPrivateLinkage;
- } else if (!TREE_PUBLIC(decl)) {
- Linkage = GlobalValue::InternalLinkage;
- } else if (DECL_WEAK(decl)) {
- // The user may have explicitly asked for weak linkage - ignore flag_odr.
- Linkage = GlobalValue::WeakAnyLinkage;
- } else if (DECL_ONE_ONLY(decl)) {
- Linkage = GlobalValue::getWeakLinkage(flag_odr);
- } else if (DECL_COMMON(decl) && // DECL_COMMON is only meaningful if no init
- (!DECL_INITIAL(decl) || DECL_INITIAL(decl) == error_mark_node)) {
- // llvm-gcc also includes DECL_VIRTUAL_P here.
- Linkage = GlobalValue::CommonLinkage;
- } else if (DECL_COMDAT(decl)) {
- Linkage = GlobalValue::getLinkOnceLinkage(flag_odr);
- } else {
- Linkage = GV->getLinkage();
- }
-
- // Allow loads from constants to be folded even if the constant has weak
- // linkage. Do this by giving the constant weak_odr linkage rather than
- // weak linkage. It is not clear whether this optimization is valid (see
- // gcc bug 36685), but mainline gcc chooses to do it, and fold may already
- // have done it, so we might as well join in with gusto.
- if (GV->isConstant()) {
- if (Linkage == GlobalValue::WeakAnyLinkage)
- Linkage = GlobalValue::WeakODRLinkage;
- else if (Linkage == GlobalValue::LinkOnceAnyLinkage)
- Linkage = GlobalValue::LinkOnceODRLinkage;
- }
- GV->setLinkage(Linkage);
-
-#ifdef TARGET_ADJUST_LLVM_LINKAGE
- TARGET_ADJUST_LLVM_LINKAGE(GV, decl);
-#endif /* TARGET_ADJUST_LLVM_LINKAGE */
-
- handleVisibility(decl, GV);
-
- // Set the section for the global.
- if (TREE_CODE(decl) == VAR_DECL) {
- if (DECL_SECTION_NAME(decl)) {
- GV->setSection(TREE_STRING_POINTER(DECL_SECTION_NAME(decl)));
-#ifdef LLVM_IMPLICIT_TARGET_GLOBAL_VAR_SECTION
- } else if (const char *Section =
- LLVM_IMPLICIT_TARGET_GLOBAL_VAR_SECTION(decl)) {
- GV->setSection(Section);
-#endif
- }
-
- // Set the alignment for the global if one of the following conditions is met:
- // 1) DECL_ALIGN is better than the alignment required by the ABI specification.
- // 2) DECL_ALIGN was set by the user.
- if (DECL_ALIGN(decl)) {
- unsigned TargetAlign =
- getTargetData().getABITypeAlignment(GV->getType()->getElementType());
- if (DECL_USER_ALIGN(decl) ||
- 8 * TargetAlign < (unsigned)DECL_ALIGN(decl)) {
- GV->setAlignment(DECL_ALIGN(decl) / 8);
- }
-#ifdef TARGET_ADJUST_CSTRING_ALIGN
- else if (DECL_INITIAL(decl) != error_mark_node && // uninitialized?
- DECL_INITIAL(decl) &&
- TREE_CODE(DECL_INITIAL(decl)) == STRING_CST) {
- TARGET_ADJUST_CSTRING_ALIGN(GV);
- }
-#endif
- }
-
- // Handle used decls
- if (DECL_PRESERVE_P (decl)) {
- if (false)//FIXME DECL_LLVM_LINKER_PRIVATE (decl))
- AttributeCompilerUsedGlobals.insert(GV);
- else
- AttributeUsedGlobals.insert(GV);
- }
-
- // Add annotate attributes for globals
- if (DECL_ATTRIBUTES(decl))
- AddAnnotateAttrsToGlobal(GV, decl);
-
-#ifdef LLVM_IMPLICIT_TARGET_GLOBAL_VAR_SECTION
- } else if (TREE_CODE(decl) == CONST_DECL) {
- if (const char *Section =
- LLVM_IMPLICIT_TARGET_GLOBAL_VAR_SECTION(decl)) {
- GV->setSection(Section);
-
- /* LLVM LOCAL - begin radar 6389998 */
-#ifdef TARGET_ADJUST_CFSTRING_NAME
- TARGET_ADJUST_CFSTRING_NAME(GV, Section);
-#endif
- /* LLVM LOCAL - end radar 6389998 */
- }
-#endif
- }
-
- if (TheDebugInfo)
- TheDebugInfo->EmitGlobalVariable(GV, decl);
-
- // Sanity check that the LLVM global has the right size.
- assert(SizeOfGlobalMatchesDecl(GV, decl) && "Global has wrong size!");
-
- // Mark the global as written so gcc doesn't waste time outputting it.
- TREE_ASM_WRITTEN(decl) = 1;
-
-//TODO timevar_pop(TV_LLVM_GLOBALS);
-}
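
One unit subtlety in the code above is worth spelling out: DECL_ALIGN is measured in bits while LLVM alignments are in bytes, hence the "8 *" and "/ 8". A standalone sketch of the decision, with made-up numbers and no GCC/LLVM types:

#include <cassert>

// Decide the explicit alignment (in bytes) to put on a global, following the
// rule in emit_global: honour DECL_ALIGN when the user forced it or when it
// beats the ABI alignment.  Returns 0 when no explicit alignment is needed.
static unsigned ChooseGlobalAlignment(unsigned DeclAlignInBits,
                                      bool UserAligned,
                                      unsigned ABIAlignInBytes) {
  if (DeclAlignInBits == 0)
    return 0;
  if (UserAligned || 8 * ABIAlignInBytes < DeclAlignInBits)
    return DeclAlignInBits / 8;      // convert bits to bytes
  return 0;
}

int main() {
  // The ABI gives 4 bytes but the decl asks for 64 bits (8 bytes): override.
  assert(ChooseGlobalAlignment(64, false, 4) == 8);
  // The ABI already gives 16 bytes; a 64-bit request changes nothing.
  assert(ChooseGlobalAlignment(64, false, 16) == 0);
  // An explicit user alignment always wins.
  assert(ChooseGlobalAlignment(64, true, 16) == 8);
  return 0;
}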
-
-
-/// ValidateRegisterVariable - Check that a static "asm" variable is
-/// well-formed. If not, emit error messages and return true. If so, return
-/// false.
-bool ValidateRegisterVariable(tree decl) {
- int RegNumber = decode_reg_name(extractRegisterName(decl));
-
- if (errorcount || sorrycount)
- return true; // Do not process broken code.
-
- /* Detect errors in declaring global registers. */
- if (RegNumber == -1)
- error("register name not specified for %q+D", decl);
- else if (RegNumber < 0)
- error("invalid register name for %q+D", decl);
- else if (TYPE_MODE(TREE_TYPE(decl)) == BLKmode)
- error("data type of %q+D isn%'t suitable for a register", decl);
-#if 0 // FIXME: enable this.
- else if (!HARD_REGNO_MODE_OK(RegNumber, TYPE_MODE(TREE_TYPE(decl))))
- error("register specified for %q+D isn%'t suitable for data type",
- decl);
-#endif
- else if (DECL_INITIAL(decl) != 0 && TREE_STATIC(decl))
- error("global register variable has initial value");
- else if (AGGREGATE_TYPE_P(TREE_TYPE(decl)))
- sorry("LLVM cannot handle register variable %q+D, report a bug",
- decl);
- else {
- if (TREE_THIS_VOLATILE(decl))
- warning(0, "volatile register variables don%'t work as you might wish");
-
- return false; // Everything ok.
- }
-
- return true;
-}
-
-
- /// make_decl_llvm - Create the DECL_LLVM for a VAR_DECL or FUNCTION_DECL. DECL
-/// should have static storage duration. In other words, it should not be an
-/// automatic variable, including PARM_DECLs.
-///
-/// There is, however, one exception: this function handles variables explicitly
-/// placed in a particular register by the user.
-///
-/// This function corresponds to make_decl_rtl in varasm.c, and is implicitly
-/// called by DECL_LLVM if a decl doesn't have an LLVM set.
-Value *make_decl_llvm(tree decl) {
- // If we already made the LLVM, then return it.
- if (Value *V = get_decl_llvm(decl))
- return V;
-
-#ifdef ENABLE_CHECKING
- // Check that we are not being given an automatic variable.
- // A weak alias has TREE_PUBLIC set but not the other bits.
- if (TREE_CODE(decl) == PARM_DECL || TREE_CODE(decl) == RESULT_DECL
- || (TREE_CODE(decl) == VAR_DECL && !TREE_STATIC(decl) &&
- !TREE_PUBLIC(decl) && !DECL_EXTERNAL(decl) && !DECL_REGISTER(decl)))
- abort();
- // And that we were not given a type or a label.
- else if (TREE_CODE(decl) == TYPE_DECL || TREE_CODE(decl) == LABEL_DECL)
- abort ();
-#endif
-
- if (errorcount || sorrycount)
- return NULL; // Do not process broken code.
-
- LLVMContext &Context = getGlobalContext();
-
- // Global register variable with asm name, e.g.:
- // register unsigned long esp __asm__("ebp");
- if (TREE_CODE(decl) != FUNCTION_DECL && DECL_REGISTER(decl)) {
- // This just verifies that the variable is ok. The actual "load/store"
- // code paths handle accesses to the variable.
- ValidateRegisterVariable(decl);
- return NULL;
- }
-
-//TODO timevar_push(TV_LLVM_GLOBALS);
-
- std::string Name;
- if (TREE_CODE(decl) != CONST_DECL) // CONST_DECLs do not have assembler names.
- Name = getLLVMAssemblerName(decl).str();
-
- // Now handle ordinary static variables and functions (in memory).
- // Also handle variables invalidly declared as register.
- if (!Name.empty() && Name[0] == 1) {
-#ifdef REGISTER_PREFIX
- if (strlen (REGISTER_PREFIX) != 0) {
- int reg_number = decode_reg_name(Name);
- if (reg_number >= 0 || reg_number == -3)
- error("register name given for non-register variable %q+D", decl);
- }
-#endif
- }
-
- // Specifying a section attribute on a variable forces it into a
- // non-.bss section, and thus it cannot be common.
- if (TREE_CODE(decl) == VAR_DECL && DECL_SECTION_NAME(decl) != NULL_TREE &&
- DECL_INITIAL(decl) == NULL_TREE && DECL_COMMON(decl))
- DECL_COMMON(decl) = 0;
-
- // Variables can't be both common and weak.
- if (TREE_CODE(decl) == VAR_DECL && DECL_WEAK(decl))
- DECL_COMMON(decl) = 0;
-
- // Okay, now we need to create an LLVM global variable or function for this
- // object. Note that this is quite possibly a forward reference to the
- // object, so its type may change later.
- if (TREE_CODE(decl) == FUNCTION_DECL) {
- assert(!Name.empty() && "Function with empty name!");
- // If this function has already been created, reuse the decl. This happens
- // when we have something like __builtin_memset and memset in the same file.
- Function *FnEntry = TheModule->getFunction(Name);
- if (FnEntry == 0) {
- CallingConv::ID CC;
- AttrListPtr PAL;
- const FunctionType *Ty =
- TheTypeConverter->ConvertFunctionType(TREE_TYPE(decl), decl, NULL,
- CC, PAL);
- FnEntry = Function::Create(Ty, Function::ExternalLinkage, Name, TheModule);
- FnEntry->setCallingConv(CC);
- FnEntry->setAttributes(PAL);
-
- // Check for external weak linkage.
- if (DECL_EXTERNAL(decl) && DECL_WEAK(decl))
- FnEntry->setLinkage(Function::ExternalWeakLinkage);
-
-#ifdef TARGET_ADJUST_LLVM_LINKAGE
- TARGET_ADJUST_LLVM_LINKAGE(FnEntry,decl);
-#endif /* TARGET_ADJUST_LLVM_LINKAGE */
-
- handleVisibility(decl, FnEntry);
-
- // If FnEntry got renamed, then there is already an object with this name
- // in the symbol table. If this happens, the old one must be a forward
- // decl, just replace it with a cast of the new one.
- if (FnEntry->getName() != Name) {
- GlobalVariable *G = TheModule->getGlobalVariable(Name, true);
- assert(G && G->isDeclaration() && "A global turned into a function?");
-
- // Replace any uses of "G" with uses of FnEntry.
- Constant *GInNewType = TheFolder->CreateBitCast(FnEntry, G->getType());
- G->replaceAllUsesWith(GInNewType);
-
- // Update the decl that points to G.
- changeLLVMConstant(G, GInNewType);
-
- // Now we can give FnEntry the proper name.
- FnEntry->takeName(G);
-
- // G is now dead, nuke it.
- G->eraseFromParent();
- }
- }
- return SET_DECL_LLVM(decl, FnEntry);
- } else {
- assert((TREE_CODE(decl) == VAR_DECL ||
- TREE_CODE(decl) == CONST_DECL) && "Not a function or var decl?");
- const Type *Ty = ConvertType(TREE_TYPE(decl));
- GlobalVariable *GV;
-
- // If we have "extern void foo", make the global have type {} instead of
- // type void.
- if (Ty->isVoidTy())
- Ty = StructType::get(Context);
-
- if (Name.empty()) { // Global has no name.
- GV = new GlobalVariable(*TheModule, Ty, false,
- GlobalValue::ExternalLinkage, 0, "");
-
- // Check for external weak linkage.
- if (DECL_EXTERNAL(decl) && DECL_WEAK(decl))
- GV->setLinkage(GlobalValue::ExternalWeakLinkage);
-
-#ifdef TARGET_ADJUST_LLVM_LINKAGE
- TARGET_ADJUST_LLVM_LINKAGE(GV,decl);
-#endif /* TARGET_ADJUST_LLVM_LINKAGE */
-
- handleVisibility(decl, GV);
- } else {
- // If the global has a name, prevent multiple vars with the same name from
- // being created.
- GlobalVariable *GVE = TheModule->getGlobalVariable(Name, true);
-
- if (GVE == 0) {
- GV = new GlobalVariable(*TheModule, Ty, false,
- GlobalValue::ExternalLinkage, 0, Name);
-
- // Check for external weak linkage.
- if (DECL_EXTERNAL(decl) && DECL_WEAK(decl))
- GV->setLinkage(GlobalValue::ExternalWeakLinkage);
-
-#ifdef TARGET_ADJUST_LLVM_LINKAGE
- TARGET_ADJUST_LLVM_LINKAGE(GV,decl);
-#endif /* TARGET_ADJUST_LLVM_LINKAGE */
-
- handleVisibility(decl, GV);
-
- // If GV got renamed, then there is already an object with this name in
- // the symbol table. If this happens, the old one must be a forward
- // decl, just replace it with a cast of the new one.
- if (GV->getName() != Name) {
- Function *F = TheModule->getFunction(Name);
- assert(F && F->isDeclaration() && "A function turned into a global?");
-
- // Replace any uses of "F" with uses of GV.
- Constant *FInNewType = TheFolder->CreateBitCast(GV, F->getType());
- F->replaceAllUsesWith(FInNewType);
-
- // Update the decl that points to F.
- changeLLVMConstant(F, FInNewType);
-
- // Now we can give GV the proper name.
- GV->takeName(F);
-
- // F is now dead, nuke it.
- F->eraseFromParent();
- }
-
- } else {
- GV = GVE; // Global already created, reuse it.
- }
- }
-
- if ((TREE_READONLY(decl) && !TREE_SIDE_EFFECTS(decl)) ||
- TREE_CODE(decl) == CONST_DECL) {
- if (DECL_EXTERNAL(decl)) {
- // Mark external globals constant even though they could be marked
- // non-constant in the defining translation unit. The definition of the
- // global determines whether the global is ultimately constant or not;
- // marking this constant will allow us to do some extra (legal)
- // optimizations that we would otherwise not be able to do. (In C++,
- // any global that is 'C++ const' may not be readonly: it could have a
- // dynamic initializer.)
- //
- GV->setConstant(true);
- } else {
- // Mark readonly globals with constant initializers constant.
- if (DECL_INITIAL(decl) != error_mark_node && // uninitialized?
- DECL_INITIAL(decl) &&
- (TREE_CONSTANT(DECL_INITIAL(decl)) ||
- TREE_CODE(DECL_INITIAL(decl)) == STRING_CST))
- GV->setConstant(true);
- }
- }
-
- // Set thread local (TLS)
- if (TREE_CODE(decl) == VAR_DECL && DECL_THREAD_LOCAL_P(decl))
- GV->setThreadLocal(true);
-
- assert((GV->isDeclaration() || SizeOfGlobalMatchesDecl(GV, decl)) &&
- "Global has unexpected initializer!");
-
- return SET_DECL_LLVM(decl, GV);
- }
-//TODO timevar_pop(TV_LLVM_GLOBALS);
-}
-
-/// make_definition_llvm - Ensures that the body or initial value of the given
-/// GCC global will be output, and returns a declaration for it.
-Value *make_definition_llvm(tree decl) {
- // Only need to do something special for global variables.
- if (TREE_CODE(decl) != CONST_DECL && TREE_CODE(decl) != VAR_DECL)
- return DECL_LLVM(decl);
- // Do not allocate storage for external references (e.g. a "weakref" alias).
- if (DECL_EXTERNAL(decl))
- return DECL_LLVM(decl);
- // Can only assign initial values to global variables in static storage.
- if (!TREE_STATIC(decl)) {
- assert(!DECL_INITIAL(decl) && "Non-static global has initial value!");
- return DECL_LLVM(decl);
- }
- GlobalValue *GV = cast<GlobalValue>(DECL_LLVM(decl));
- // If we already output a definition for this declaration, then reuse it.
- if (!GV->isDeclaration())
- return GV;
- emit_global(decl);
- return DECL_LLVM(decl); // Decl could have changed if it changed type.
-}
-
-/// register_ctor_dtor - Called to register static ctors/dtors with LLVM.
- /// Fn is a 'void()' ctor/dtor function to be run, InitPrio is the init
-/// priority, and isCtor indicates whether this is a ctor or dtor.
-void register_ctor_dtor(Function *Fn, int InitPrio, bool isCtor) {
- (isCtor ? &StaticCtors:&StaticDtors)->push_back(std::make_pair(Fn, InitPrio));
-}
-
-//FIXME/// print_llvm - Print the specified LLVM chunk like an operand, called by
-//FIXME/// print-tree.c for tree dumps.
-//FIXMEvoid print_llvm(FILE *file, void *LLVM) {
-//FIXME oFILEstream FS(file);
-//FIXME FS << "LLVM: ";
-//FIXME WriteAsOperand(FS, (Value*)LLVM, true, TheModule);
-//FIXME}
-//FIXME
-//FIXME/// print_llvm_type - Print the specified LLVM type symbolically, called by
-//FIXME/// print-tree.c for tree dumps.
-//FIXMEvoid print_llvm_type(FILE *file, void *LLVM) {
-//FIXME oFILEstream FS(file);
-//FIXME FS << "LLVM: ";
-//FIXME
-//FIXME // FIXME: oFILEstream can probably be removed in favor of a new raw_ostream
-//FIXME // adaptor which would be simpler and more efficient. In the meantime, just
-//FIXME // adapt the adaptor.
-//FIXME raw_os_ostream RO(FS);
-//FIXME WriteTypeSymbolic(RO, (const Type*)LLVM, TheModule);
-//FIXME}
-
- /// extractRegisterName - Get a register name given its decl. In GCC 4.2, unlike
- /// 4.0, these names have been run through set_user_assembler_name, which means
- /// they may have a leading star at this point; compensate.
-const char* extractRegisterName(tree decl) {
- const char* Name = IDENTIFIER_POINTER(DECL_ASSEMBLER_NAME(decl));
- return (*Name == '*') ? Name + 1 : Name;
-}
-
-/// getLLVMAssemblerName - Get the assembler name (DECL_ASSEMBLER_NAME) for the
-/// declaration, with any leading star replaced by '\1'.
-Twine getLLVMAssemblerName(union tree_node *decl) {
- tree Ident = DECL_ASSEMBLER_NAME(decl);
- if (!Ident)
- return "";
-
- const char *Name = IDENTIFIER_POINTER(Ident);
- if (*Name != '*')
- return Name;
-
- return "\1" + Twine(Name + 1);
-}
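
A standalone sketch of the two name tweaks above: extractRegisterName drops the leading '*' added by set_user_assembler_name, while getLLVMAssemblerName rewrites it to the '\1' byte that tells LLVM to emit the name verbatim. The sample names are invented:

#include <cassert>
#include <string>

// extractRegisterName: drop the leading '*' if present.
static const char *StripLeadingStar(const char *Name) {
  return (*Name == '*') ? Name + 1 : Name;
}

// getLLVMAssemblerName: a leading '*' means "use this name verbatim"; LLVM
// expresses the same request with a leading '\1' byte.
static std::string ToLLVMAssemblerName(const std::string &Name) {
  if (Name.empty() || Name[0] != '*')
    return Name;
  return std::string("\1") + Name.substr(1);
}

int main() {
  assert(std::string(StripLeadingStar("*eax")) == "eax");
  assert(ToLLVMAssemblerName("*_asm_name") == "\1_asm_name");
  assert(ToLLVMAssemblerName("plain_name") == "plain_name");
  return 0;
}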
-
-/// FinalizePlugin - Shutdown the plugin.
-static void FinalizePlugin(void) {
- static bool Finalized = false;
- if (Finalized)
- return;
-
-#ifndef NDEBUG
- delete PerModulePasses;
- delete PerFunctionPasses;
- delete CodeGenPasses;
- delete TheModule;
- llvm_shutdown();
-#endif
-
- Finalized = true;
-}
-
-/// TakeoverAsmOutput - Obtain exclusive use of the assembly code output file.
-/// Any GCC output will be thrown away.
-static void TakeoverAsmOutput(void) {
- // Calculate the output file name as in init_asm_output (toplev.c).
- if (!dump_base_name && main_input_filename)
- dump_base_name = main_input_filename[0] ? main_input_filename : "gccdump";
-
- if (!main_input_filename && !asm_file_name) {
- llvm_asm_file_name = "-";
- } else if (!asm_file_name) {
- int len = strlen(dump_base_name);
- char *dumpname = XNEWVEC(char, len + 6);
-
- memcpy(dumpname, dump_base_name, len + 1);
- strip_off_ending(dumpname, len);
- strcat(dumpname, ".s");
- llvm_asm_file_name = dumpname;
- } else {
- llvm_asm_file_name = asm_file_name;
- }
-
- if (!SaveGCCOutput) {
- // Redirect any GCC output to /dev/null.
- asm_file_name = HOST_BIT_BUCKET;
- } else {
- // Save GCC output to a special file. Good for seeing how much pointless
- // output gcc is producing.
- int len = strlen(llvm_asm_file_name);
- char *name = XNEWVEC(char, len + 5);
- memcpy(name, llvm_asm_file_name, len + 1);
- asm_file_name = strcat(name, ".gcc");
- }
-}
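
The name juggling above is plain C string work; a small sketch of the same steps using std::string, with an invented input name and a crude stand-in for strip_off_ending:

#include <cassert>
#include <string>

// Remove a trailing ".xyz" ending, roughly what strip_off_ending does for
// recognised source-file suffixes.
static std::string StripEnding(std::string Name) {
  std::string::size_type Dot = Name.rfind('.');
  if (Dot != std::string::npos)
    Name.erase(Dot);
  return Name;
}

int main() {
  // The LLVM output name is the dump base name with ".s" appended.
  std::string DumpBase = "foo.c";
  std::string LLVMAsmFile = StripEnding(DumpBase) + ".s";
  assert(LLVMAsmFile == "foo.s");

  // When SaveGCCOutput is set, GCC's own output goes to a sibling ".gcc"
  // file rather than to the bit bucket.
  std::string GCCAsmFile = LLVMAsmFile + ".gcc";
  assert(GCCAsmFile == "foo.s.gcc");
  return 0;
}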
-
-
-//===----------------------------------------------------------------------===//
-// Plugin interface
-//===----------------------------------------------------------------------===//
-
-// This plugin's code is licensed under the GPLv2 or later. The LLVM libraries
-// use the GPL compatible University of Illinois/NCSA Open Source License. The
-// plugin is GPL compatible.
-int plugin_is_GPL_compatible __attribute__ ((visibility("default")));
-
-
-/// llvm_start_unit - Perform late initialization. This is called by GCC just
-/// before processing the compilation unit.
-/// NOTE: called even when only doing syntax checking, so do not initialize the
-/// module etc here.
-static void llvm_start_unit(void * /*gcc_data*/, void * /*user_data*/) {
- if (!quiet_flag)
- errs() << "Starting compilation unit\n";
-
-#ifdef ENABLE_LTO
- // Output LLVM IR if the user requested generation of lto data.
- EmitIR |= flag_generate_lto != 0;
- // We have the same needs as GCC's LTO. Always claim to be doing LTO.
- flag_lto = 1;
- flag_whopr = 0;
- flag_generate_lto = 1;
- flag_whole_program = 0;
-#else
-# error "LTO support required but not enabled in GCC"
-#endif
-
- // Stop GCC outputting serious amounts of debug info.
- debug_hooks = &do_nothing_debug_hooks;
-}
-
-
-/// gate_emission - Whether to turn gimple into LLVM IR.
-static bool gate_emission(void) {
- // Don't bother doing anything if the program has errors.
- return !errorcount && !sorrycount; // Do not process broken code.
-}
-
-/// emit_current_function - Turn the current gimple function into LLVM IR. This
-/// is called once for each function in the compilation unit.
-static void emit_current_function() {
- if (!quiet_flag && DECL_NAME(current_function_decl))
- errs() << IDENTIFIER_POINTER(DECL_NAME(current_function_decl));
-
- // Convert the AST to raw/ugly LLVM code.
- Function *Fn;
- {
- TreeToLLVM Emitter(current_function_decl);
- Fn = Emitter.EmitFunction();
- }
-
- if (!errorcount && !sorrycount) { // Do not process broken code.
- createPerFunctionOptimizationPasses();
-
- if (PerFunctionPasses)
- PerFunctionPasses->run(*Fn);
-
- // TODO: Nuke the .ll code for the function at -O[01] if we don't want to
- // inline it or something else.
- }
-}
-
-/// emit_function - Turn a gimple function into LLVM IR. This is called once
-/// for each function in the compilation unit if GCC optimizations are disabled.
-static void emit_function(struct cgraph_node *node) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- tree function = node->decl;
- struct function *fn = DECL_STRUCT_FUNCTION(function);
-
- // Set the current function to this one.
- // TODO: Make it so we don't need to do this.
- assert(current_function_decl == NULL_TREE && cfun == NULL &&
- "Current function already set!");
- current_function_decl = function;
- push_cfun (fn);
-
- // Convert the function.
- emit_current_function();
-
- // Done with this function.
- current_function_decl = NULL;
- pop_cfun ();
-}
-
-/// GetLinkageForAlias - The given GCC declaration is an alias or thunk. Return
-/// the appropriate LLVM linkage type for it.
-static GlobalValue::LinkageTypes GetLinkageForAlias(tree decl) {
- if (DECL_COMDAT(decl))
- // Need not be put out unless needed in this translation unit.
- return GlobalValue::InternalLinkage;
-
- if (DECL_ONE_ONLY(decl))
- // Copies of this DECL in multiple translation units should be merged.
- return GlobalValue::getWeakLinkage(flag_odr);
-
- if (DECL_WEAK(decl))
- // The user may have explicitly asked for weak linkage - ignore flag_odr.
- return GlobalValue::WeakAnyLinkage;
-
- if (!TREE_PUBLIC(decl))
- // Not accessible from outside this translation unit.
- return GlobalValue::InternalLinkage;
-
- if (DECL_EXTERNAL(decl))
- // Do not allocate storage, and refer to a definition elsewhere.
- return GlobalValue::InternalLinkage;
-
- return GlobalValue::ExternalLinkage;
-}
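
A condensed standalone mirror of the decision chain above, with booleans standing in for the GCC decl predicates and a three-way enum collapsing the weak/weak_odr distinction:

#include <cassert>

// Stand-ins for the LLVM linkage kinds GetLinkageForAlias chooses between.
enum AliasLinkage { Internal, Weak, External };

// Same ordering of tests as GetLinkageForAlias.
static AliasLinkage ChooseAliasLinkage(bool IsComdat, bool IsOneOnly,
                                       bool IsWeak, bool IsPublic,
                                       bool IsExternal) {
  if (IsComdat)   return Internal;  // only needed within this unit
  if (IsOneOnly)  return Weak;      // copies in other units get merged
  if (IsWeak)     return Weak;      // user explicitly asked for weak
  if (!IsPublic)  return Internal;  // not visible outside this unit
  if (IsExternal) return Internal;  // definition lives elsewhere
  return External;
}

int main() {
  assert(ChooseAliasLinkage(false, false, false, true, false) == External);
  assert(ChooseAliasLinkage(true,  false, false, true, false) == Internal);
  assert(ChooseAliasLinkage(false, true,  false, true, false) == Weak);
  return 0;
}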
-
-/// ApplyVirtualOffset - Adjust 'this' by a virtual offset.
-static Value *ApplyVirtualOffset(Value *This, HOST_WIDE_INT virtual_value,
- LLVMBuilder &Builder) {
- LLVMContext &Context = getGlobalContext();
- const Type *BytePtrTy = Type::getInt8PtrTy(Context); // i8*
- const Type *HandleTy = BytePtrTy->getPointerTo(); // i8**
- const Type *IntPtrTy = TheTarget->getTargetData()->getIntPtrType(Context);
-
- // The vptr is always at offset zero in the object.
- Value *VPtr = Builder.CreateBitCast(This, HandleTy->getPointerTo()); // i8***
-
- // Form the vtable address.
- Value *VTableAddr = Builder.CreateLoad(VPtr); // i8**
-
- // Find the entry with the vcall offset.
- Value *VOffset = ConstantInt::get(IntPtrTy, virtual_value);
- VTableAddr = Builder.CreateBitCast(VTableAddr, BytePtrTy);
- VTableAddr = Builder.CreateInBoundsGEP(VTableAddr, VOffset);
- VTableAddr = Builder.CreateBitCast(VTableAddr, HandleTy); // i8**
-
- // Get the offset itself.
- Value *VCallOffset = Builder.CreateLoad(VTableAddr); // i8*
- VCallOffset = Builder.CreatePtrToInt(VCallOffset, IntPtrTy);
-
- // Adjust the 'this' pointer.
- Value *Adjusted = Builder.CreateBitCast(This, BytePtrTy);
- Adjusted = Builder.CreateInBoundsGEP(Adjusted, VCallOffset);
- return Builder.CreateBitCast(Adjusted, This->getType());
-}
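
To see what the emitted IR actually computes, here is a standalone C++ model of the same arithmetic: load the vptr at offset zero, index the vtable by virtual_value bytes, load the stored offset and add it to 'this'. The vtable layout and offsets below are invented for illustration:

#include <cassert>
#include <cstddef>
#include <cstring>

// Perform the adjustment ApplyVirtualOffset generates IR for, but directly in
// C++: 'This' points at an object whose first word is a vptr, and the vtable
// slot at byte offset VirtualValue holds the byte offset to add to 'this'.
static void *AdjustThisByVirtualOffset(void *This, std::ptrdiff_t VirtualValue) {
  char *VTable;
  std::memcpy(&VTable, This, sizeof(VTable));          // load the vptr
  std::ptrdiff_t VCallOffset;
  std::memcpy(&VCallOffset, VTable + VirtualValue,     // load the vcall offset
              sizeof(VCallOffset));
  return static_cast<char *>(This) + VCallOffset;      // adjust 'this'
}

int main() {
  // Fake "vtable": the slot one word in holds a vcall offset of 16 bytes.
  std::ptrdiff_t Slots[4] = { 0, 16, 0, 0 };

  // Fake object: the first field is the vptr, pointing at the fake vtable.
  struct FakeObject { void *VPtr; char Payload[32]; } Obj = { Slots, { 0 } };

  void *Adjusted = AdjustThisByVirtualOffset(&Obj, sizeof(std::ptrdiff_t));
  assert(Adjusted == reinterpret_cast<char *>(&Obj) + 16);
  (void)Adjusted;
  return 0;
}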
-
-/// emit_thunk - Turn a thunk into LLVM IR.
-static void emit_thunk(struct cgraph_node *node) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- Function *Thunk = cast<Function>(DECL_LLVM(node->decl));
- if (Thunk->isVarArg()) {
- sorry("thunks to varargs functions not supported");
- return;
- }
-
- // Mark the thunk as written so gcc doesn't waste time outputting it.
- TREE_ASM_WRITTEN(node->decl) = 1;
-
- // Set the linkage and visibility.
- Thunk->setLinkage(GetLinkageForAlias(node->decl));
- handleVisibility(node->decl, Thunk);
-
- // Whether the thunk adjusts 'this' before calling the thunk alias (otherwise
- // it is the value returned by the alias that is adjusted).
- bool ThisAdjusting = node->thunk.this_adjusting;
-
- LLVMContext &Context = getGlobalContext();
- const Type *BytePtrTy = Type::getInt8Ty(Context)->getPointerTo();
- const Type *IntPtrTy = TheTarget->getTargetData()->getIntPtrType(Context);
- LLVMBuilder Builder(Context, *TheFolder);
- Builder.SetInsertPoint(BasicBlock::Create(Context, "entry", Thunk));
-
- // Whether we found 'this' yet. When not 'this adjusting', setting this to
- // 'true' means all parameters (including 'this') are passed through as is.
- bool FoundThis = !ThisAdjusting;
-
- SmallVector<Value *, 16> Arguments;
- for (Function::arg_iterator AI = Thunk->arg_begin(), AE = Thunk->arg_end();
- AI != AE; ++AI) {
- // While 'this' is always the first GCC argument, we may have introduced
- // additional artificial arguments for doing struct return or passing a
- // nested function static chain. Look for 'this' while passing through
- // all arguments except for 'this' unchanged.
- if (FoundThis || AI->hasStructRetAttr() || AI->hasNestAttr()) {
- Arguments.push_back(AI);
- continue;
- }
-
- FoundThis = true; // The current argument is 'this'.
- assert(AI->getType()->isPointerTy() && "Wrong type for 'this'!");
- Value *This = AI;
-
- // Adjust 'this' according to the thunk offsets. First, the fixed offset.
- if (node->thunk.fixed_offset) {
- Value *Offset = ConstantInt::get(IntPtrTy, node->thunk.fixed_offset);
- This = Builder.CreateBitCast(This, BytePtrTy);
- This = Builder.CreateInBoundsGEP(This, Offset);
- This = Builder.CreateBitCast(This, AI->getType());
- }
-
- // Then by the virtual offset, if any.
- if (node->thunk.virtual_offset_p)
- This = ApplyVirtualOffset(This, node->thunk.virtual_value, Builder);
-
- Arguments.push_back(This);
- }
-
- CallInst *Call = Builder.CreateCall(DECL_LLVM(node->thunk.alias),
- Arguments.begin(), Arguments.end());
- Call->setCallingConv(Thunk->getCallingConv());
- Call->setAttributes(Thunk->getAttributes());
- // All parameters except 'this' are passed on unchanged - this is a tail call.
- Call->setTailCall();
-
- if (ThisAdjusting) {
- // Return the value unchanged.
- if (Thunk->getReturnType()->isVoidTy())
- Builder.CreateRetVoid();
- else
- Builder.CreateRet(Call);
- return;
- }
-
- // Covariant return thunk - adjust the returned value by the thunk offsets.
- assert(Call->getType()->isPointerTy() && "Only know how to adjust pointers!");
- Value *RetVal = Call;
-
- // First check if the returned value is NULL.
- Value *Zero = Constant::getNullValue(RetVal->getType());
- Value *isNull = Builder.CreateICmpEQ(RetVal, Zero);
-
- BasicBlock *isNullBB = BasicBlock::Create(Context, "isNull", Thunk);
- BasicBlock *isNotNullBB = BasicBlock::Create(Context, "isNotNull", Thunk);
- Builder.CreateCondBr(isNull, isNullBB, isNotNullBB);
-
- // If it is NULL, return it without any adjustment.
- Builder.SetInsertPoint(isNullBB);
- Builder.CreateRet(Zero);
-
- // Otherwise, first adjust by the virtual offset, if any.
- Builder.SetInsertPoint(isNotNullBB);
- if (node->thunk.virtual_offset_p)
- RetVal = ApplyVirtualOffset(RetVal, node->thunk.virtual_value, Builder);
-
- // Then move 'this' by the fixed offset.
- if (node->thunk.fixed_offset) {
- Value *Offset = ConstantInt::get(IntPtrTy, node->thunk.fixed_offset);
- RetVal = Builder.CreateBitCast(RetVal, BytePtrTy);
- RetVal = Builder.CreateInBoundsGEP(RetVal, Offset);
- RetVal = Builder.CreateBitCast(RetVal, Thunk->getReturnType());
- }
-
- // Return the adjusted value.
- Builder.CreateRet(RetVal);
-}
-
- /// emit_alias - Given a decl and a target, emit an alias to the target.
-static void emit_alias(tree decl, tree target) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- // Get or create LLVM global for our alias.
- GlobalValue *V = cast<GlobalValue>(DECL_LLVM(decl));
-
- bool weakref = lookup_attribute("weakref", DECL_ATTRIBUTES(decl));
- if (weakref)
- while (IDENTIFIER_TRANSPARENT_ALIAS(target))
- target = TREE_CHAIN(target);
-
- if (TREE_CODE(target) == IDENTIFIER_NODE) {
- if (struct cgraph_node *fnode = cgraph_node_for_asm(target))
- target = fnode->decl;
- else if (struct varpool_node *vnode = varpool_node_for_asm(target))
- target = vnode->decl;
- }
-
- GlobalValue *Aliasee = 0;
- if (TREE_CODE(target) == IDENTIFIER_NODE) {
- if (!weakref) {
- error("%q+D aliased to undefined symbol %qs", decl,
- IDENTIFIER_POINTER(target));
- return;
- }
-
- // weakref to external symbol.
- if (GlobalVariable *GV = dyn_cast<GlobalVariable>(V))
- Aliasee = new GlobalVariable(*TheModule, GV->getType(),
- GV->isConstant(),
- GlobalVariable::ExternalWeakLinkage, NULL,
- IDENTIFIER_POINTER(target));
- else if (Function *F = dyn_cast<Function>(V))
- Aliasee = Function::Create(F->getFunctionType(),
- Function::ExternalWeakLinkage,
- IDENTIFIER_POINTER(target),
- TheModule);
- else
- assert(0 && "Unsupported global value");
- } else {
- Aliasee = cast<GlobalValue>(DEFINITION_LLVM(target));
- }
-
- GlobalValue::LinkageTypes Linkage = GetLinkageForAlias(decl);
-
- if (Linkage != GlobalValue::InternalLinkage) {
- // Create the LLVM alias.
- GlobalAlias* GA = new GlobalAlias(Aliasee->getType(), Linkage, "",
- Aliasee, TheModule);
- handleVisibility(decl, GA);
-
- // Associate it with decl instead of V.
- V->replaceAllUsesWith(ConstantExpr::getBitCast(GA, V->getType()));
- changeLLVMConstant(V, GA);
- GA->takeName(V);
- } else {
- // Make all users of the alias directly use the aliasee instead.
- V->replaceAllUsesWith(ConstantExpr::getBitCast(Aliasee, V->getType()));
- changeLLVMConstant(V, Aliasee);
- }
-
- V->eraseFromParent();
-
- // Mark the alias as written so gcc doesn't waste time outputting it.
- TREE_ASM_WRITTEN(decl) = 1;
-}
-
-/// emit_same_body_alias - Turn a same-body alias into LLVM IR.
-static void emit_same_body_alias(struct cgraph_node *alias,
- struct cgraph_node * /*target*/) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- // If the target is not "extern inline" then output an ordinary alias.
- tree target = alias->thunk.alias;
- if (!DECL_EXTERNAL(target)) {
- emit_alias(alias->decl, target);
- return;
- }
-
- // Same body aliases have the property that if the body of the aliasee is not
- // output then neither are the aliases. To arrange this for "extern inline"
- // functions, which have AvailableExternally linkage in LLVM, make all users
- // of the alias directly use the aliasee instead.
- GlobalValue *Alias = cast<GlobalValue>(DECL_LLVM(alias->decl));
- GlobalValue *Aliasee = cast<GlobalValue>(DEFINITION_LLVM(target));
- Alias->replaceAllUsesWith(ConstantExpr::getBitCast(Aliasee,Alias->getType()));
- changeLLVMConstant(Alias, Aliasee);
- Alias->eraseFromParent();
-
- // Mark the alias as written so gcc doesn't waste time outputting it.
- TREE_ASM_WRITTEN(alias->decl) = 1;
-}
-
-/// emit_file_scope_asm - Emit the specified string as a file-scope inline
-/// asm block.
-static void emit_file_scope_asm(tree string) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- if (TREE_CODE(string) == ADDR_EXPR)
- string = TREE_OPERAND(string, 0);
- TheModule->appendModuleInlineAsm(TREE_STRING_POINTER (string));
-}
-
-/// emit_functions - Turn all functions in the compilation unit into LLVM IR.
-static void emit_functions(cgraph_node_set set
-#if (GCC_MINOR > 5)
- , varpool_node_set /*vset*/
-#endif
- ) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- InitializeBackend();
-
- // Visit each function with a body, outputting it only once (the same function
- // can appear in multiple cgraph nodes due to cloning).
- SmallPtrSet<tree, 32> Visited;
- for (cgraph_node_set_iterator csi = csi_start(set); !csi_end_p(csi);
- csi_next(&csi)) {
- struct cgraph_node *node = csi_node(csi);
- if (node->analyzed && Visited.insert(node->decl))
- // If GCC optimizations are enabled then functions are output later, in
- // place of gimple to RTL conversion.
- if (!EnableGCCOptimizations)
- emit_function(node);
-
- // Output any same-body aliases or thunks in the order they were created.
- struct cgraph_node *alias, *next;
- for (alias = node->same_body; alias && alias->next; alias = alias->next);
- for (; alias; alias = next) {
- next = alias->previous;
- if (alias->thunk.thunk_p)
- emit_thunk(alias);
- else
- emit_same_body_alias(alias, node);
- }
- }
-
- // Emit any file-scope asms.
- for (struct cgraph_asm_node *can = cgraph_asm_nodes; can; can = can->next)
- emit_file_scope_asm(can->asm_str);
-
- // Remove the asms so gcc doesn't waste time outputting them.
- cgraph_asm_nodes = NULL;
-}
-
-/// pass_emit_functions - IPA pass that turns gimple functions into LLVM IR.
-static struct ipa_opt_pass_d pass_emit_functions = {
- {
- IPA_PASS,
- "emit_functions", /* name */
- gate_emission, /* gate */
- NULL, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- },
- NULL, /* generate_summary */
- emit_functions, /* write_summary */
- NULL, /* read_summary */
-#if (GCC_MINOR > 5)
- NULL, /* write_optimization_summary */
- NULL, /* read_optimization_summary */
-#else
- NULL, /* function_read_summary */
-#endif
- NULL, /* stmt_fixup */
- 0, /* function_transform_todo_flags_start */
- NULL, /* function_transform */
- NULL /* variable_transform */
-};
-
-/// emit_variables - Output GCC global variables to the LLVM IR.
-static void emit_variables(cgraph_node_set /*set*/
-#if (GCC_MINOR > 5)
- , varpool_node_set /*vset*/
-#endif
- ) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- InitializeBackend();
-
- // Output all externally visible global variables, whether they are used in
- // this compilation unit or not, as well as any internal variables explicitly
- // marked with the 'used' attribute. All other internal variables are output
- // when their user is, or discarded if unused.
- struct varpool_node *vnode;
- FOR_EACH_STATIC_VARIABLE (vnode) {
- tree var = vnode->decl;
- if (TREE_CODE(var) == VAR_DECL &&
- (TREE_PUBLIC(var) || DECL_PRESERVE_P(var)))
- emit_global(var);
- }
-
- // Emit any aliases.
- alias_pair *p;
- for (unsigned i = 0; VEC_iterate(alias_pair, alias_pairs, i, p); i++)
- emit_alias(p->decl, p->target);
-}
-
-/// pass_emit_variables - IPA pass that turns GCC variables into LLVM IR.
-static struct ipa_opt_pass_d pass_emit_variables = {
- {
- IPA_PASS,
- "emit_variables", /* name */
- gate_emission, /* gate */
- NULL, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- },
- NULL, /* generate_summary */
- emit_variables, /* write_summary */
- NULL, /* read_summary */
-#if (GCC_MINOR > 5)
- NULL, /* write_optimization_summary */
- NULL, /* read_optimization_summary */
-#else
- NULL, /* function_read_summary */
-#endif
- NULL, /* stmt_fixup */
- 0, /* function_transform_todo_flags_start */
- NULL, /* function_transform */
- NULL /* variable_transform */
-};
-
-/// disable_rtl - Mark the current function as having been written to assembly.
-static unsigned int disable_rtl(void) {
- // Free any data structures.
- execute_free_datastructures();
-
- // Mark the function as written.
- TREE_ASM_WRITTEN(current_function_decl) = 1;
-
- // That's all folks!
- return 0;
-}
-
-/// pass_disable_rtl - RTL pass that pretends to codegen functions, but actually
- /// only does the hoop jumping required by GCC.
-static struct rtl_opt_pass pass_disable_rtl =
-{
- {
- RTL_PASS,
- "disable_rtl", /* name */
- NULL, /* gate */
- disable_rtl, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- PROP_ssa | PROP_trees, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- }
-};
-
-/// rtl_emit_function - Turn a gimple function into LLVM IR. This is called
-/// once for each function in the compilation unit if GCC optimizations are
-/// enabled.
-static unsigned int rtl_emit_function (void) {
- InitializeBackend();
-
- // Convert the function.
- emit_current_function();
-
- // Free any data structures.
- execute_free_datastructures();
-
- // Finally, we have written out this function!
- TREE_ASM_WRITTEN(current_function_decl) = 1;
- return 0;
-}
-
-/// pass_rtl_emit_function - RTL pass that converts a function to LLVM IR.
-static struct rtl_opt_pass pass_rtl_emit_function =
-{
- {
- RTL_PASS,
- "rtl_emit_function", /* name */
- gate_emission, /* gate */
- rtl_emit_function, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- PROP_ssa | PROP_gimple_leh | PROP_gimple_lomp
- | PROP_cfg, /* properties_required */
- 0, /* properties_provided */
- PROP_ssa | PROP_trees, /* properties_destroyed */
- TODO_verify_ssa | TODO_verify_flow
- | TODO_verify_stmts, /* todo_flags_start */
- TODO_ggc_collect /* todo_flags_finish */
- }
-};
-
-
-/// llvm_finish - Run shutdown code when GCC exits.
-static void llvm_finish(void * /*gcc_data*/, void * /*user_data*/) {
- FinalizePlugin();
-}
-
-/// llvm_finish_unit - Finish the .s file. This is called by GCC once the
-/// compilation unit has been completely processed.
-static void llvm_finish_unit(void * /*gcc_data*/, void * /*user_data*/) {
- if (errorcount || sorrycount)
- return; // Do not process broken code.
-
- if (!quiet_flag)
- errs() << "Finishing compilation unit\n";
-
- InitializeBackend();
-
-//TODO timevar_push(TV_LLVM_PERFILE);
- LLVMContext &Context = getGlobalContext();
-
- createPerFunctionOptimizationPasses();
-//TODO
-//TODO if (flag_pch_file) {
-//TODO writeLLVMTypesStringTable();
-//TODO writeLLVMValues();
-//TODO }
-
-//TODO for (Module::iterator I = TheModule->begin(), E = TheModule->end();
-//TODO I != E; ++I)
-//TODO if (!I->isDeclaration()) {
-//TODO if (flag_disable_red_zone)
-//TODO I->addFnAttr(Attribute::NoRedZone);
-//TODO if (flag_no_implicit_float)
-//TODO I->addFnAttr(Attribute::NoImplicitFloat);
-//TODO }
-
- // Add an llvm.global_ctors global if needed.
- if (!StaticCtors.empty())
- CreateStructorsList(StaticCtors, "llvm.global_ctors");
- // Add an llvm.global_dtors global if needed.
- if (!StaticDtors.empty())
- CreateStructorsList(StaticDtors, "llvm.global_dtors");
-
- if (!AttributeUsedGlobals.empty()) {
- std::vector<Constant *> AUGs;
- const Type *SBP = Type::getInt8PtrTy(Context);
- for (SmallSetVector<Constant *,32>::iterator
- AI = AttributeUsedGlobals.begin(),
- AE = AttributeUsedGlobals.end(); AI != AE; ++AI) {
- Constant *C = *AI;
- AUGs.push_back(TheFolder->CreateBitCast(C, SBP));
- }
-
- ArrayType *AT = ArrayType::get(SBP, AUGs.size());
- Constant *Init = ConstantArray::get(AT, AUGs);
- GlobalValue *gv = new GlobalVariable(*TheModule, AT, false,
- GlobalValue::AppendingLinkage, Init,
- "llvm.used");
- gv->setSection("llvm.metadata");
- AttributeUsedGlobals.clear();
- }
-
- if (!AttributeCompilerUsedGlobals.empty()) {
- std::vector<Constant *> ACUGs;
- const Type *SBP = Type::getInt8PtrTy(Context);
- for (SmallSetVector<Constant *,32>::iterator
- AI = AttributeCompilerUsedGlobals.begin(),
- AE = AttributeCompilerUsedGlobals.end(); AI != AE; ++AI) {
- Constant *C = *AI;
- ACUGs.push_back(TheFolder->CreateBitCast(C, SBP));
- }
-
- ArrayType *AT = ArrayType::get(SBP, ACUGs.size());
- Constant *Init = ConstantArray::get(AT, ACUGs);
- GlobalValue *gv = new GlobalVariable(*TheModule, AT, false,
- GlobalValue::AppendingLinkage, Init,
- "llvm.compiler.used");
- gv->setSection("llvm.metadata");
- AttributeCompilerUsedGlobals.clear();
- }
-
- // Add llvm.global.annotations
- if (!AttributeAnnotateGlobals.empty()) {
- Constant *Array = ConstantArray::get(
- ArrayType::get(AttributeAnnotateGlobals[0]->getType(),
- AttributeAnnotateGlobals.size()),
- AttributeAnnotateGlobals);
- GlobalValue *gv = new GlobalVariable(*TheModule, Array->getType(), false,
- GlobalValue::AppendingLinkage, Array,
- "llvm.global.annotations");
- gv->setSection("llvm.metadata");
- AttributeAnnotateGlobals.clear();
- }
-
- // Finish off the per-function pass.
- if (PerFunctionPasses)
- PerFunctionPasses->doFinalization();
-
-//TODO // Emit intermediate file before module level optimization passes are run.
-//TODO if (flag_debug_llvm_module_opt) {
-//TODO
-//TODO static PassManager *IntermediatePM = new PassManager();
-//TODO IntermediatePM->add(new TargetData(*TheTarget->getTargetData()));
-//TODO
-//TODO char asm_intermediate_out_filename[MAXPATHLEN];
-//TODO strcpy(&asm_intermediate_out_filename[0], llvm_asm_file_name);
-//TODO strcat(&asm_intermediate_out_filename[0],".0");
-//TODO FILE *asm_intermediate_out_file = fopen(asm_intermediate_out_filename, "w+b");
-//TODO AsmIntermediateOutStream = new oFILEstream(asm_intermediate_out_file);
-//TODO raw_ostream *AsmIntermediateRawOutStream =
-//TODO new raw_os_ostream(*AsmIntermediateOutStream);
-//TODO if (EmitIR && 0)
-//TODO IntermediatePM->add(createBitcodeWriterPass(*AsmIntermediateOutStream));
-//TODO if (EmitIR)
-//TODO IntermediatePM->add(createPrintModulePass(AsmIntermediateRawOutStream));
-//TODO IntermediatePM->run(*TheModule);
-//TODO AsmIntermediateRawOutStream->flush();
-//TODO delete AsmIntermediateRawOutStream;
-//TODO AsmIntermediateRawOutStream = 0;
-//TODO AsmIntermediateOutStream->flush();
-//TODO fflush(asm_intermediate_out_file);
-//TODO delete AsmIntermediateOutStream;
-//TODO AsmIntermediateOutStream = 0;
-//TODO }
-
- // Run module-level optimizers, if any are present.
- createPerModuleOptimizationPasses();
- if (PerModulePasses)
- PerModulePasses->run(*TheModule);
-
- // Run the code generator, if present.
- if (CodeGenPasses) {
- CodeGenPasses->doInitialization();
- for (Module::iterator I = TheModule->begin(), E = TheModule->end();
- I != E; ++I)
- if (!I->isDeclaration())
- CodeGenPasses->run(*I);
- CodeGenPasses->doFinalization();
- }
-
- FormattedOutStream.flush();
- OutStream->flush();
-//TODO delete AsmOutRawStream;
-//TODO AsmOutRawStream = 0;
-//TODO delete AsmOutStream;
-//TODO AsmOutStream = 0;
-//TODO timevar_pop(TV_LLVM_PERFILE);
-
- // We have finished - shut down the plugin. Doing this here ensures that timer
- // info and other statistics are not intermingled with those produced by GCC.
- FinalizePlugin();
-}
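// For reference, the "llvm.used" and "llvm.compiler.used" arrays built above share one
// shape: an appending-linkage array of i8* casts of the marked globals, living in the
// "llvm.metadata" section.  The standalone sketch below shows that pattern against the
// same era of LLVM C++ API used in this file; the helper name emitUsedArray and the
// exact header list are illustrative assumptions, not part of this patch.

#include "llvm/Constants.h"
#include "llvm/DerivedTypes.h"
#include "llvm/GlobalVariable.h"
#include "llvm/Module.h"
#include <vector>
using namespace llvm;

static void emitUsedArray(Module &M, const std::vector<Constant*> &Used,
                          const char *Name) { // "llvm.used" or "llvm.compiler.used"
  if (Used.empty())
    return;
  const Type *SBP = Type::getInt8PtrTy(M.getContext());
  std::vector<Constant*> Casts;
  for (unsigned i = 0, e = Used.size(); i != e; ++i)
    Casts.push_back(ConstantExpr::getBitCast(Used[i], SBP)); // everything as i8*
  ArrayType *AT = ArrayType::get(SBP, Casts.size());
  GlobalVariable *GV =
    new GlobalVariable(M, AT, /*isConstant=*/false, GlobalValue::AppendingLinkage,
                       ConstantArray::get(AT, Casts), Name);
  GV->setSection("llvm.metadata"); // keep the array out of the normal data sections
}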
-
-
-/// gate_null - Gate method for a pass that does nothing.
-static bool gate_null (void) {
- return false;
-}
-
-/// pass_gimple_null - Gimple pass that does nothing.
-static struct gimple_opt_pass pass_gimple_null =
-{
- {
- GIMPLE_PASS,
- "*gimple_null", /* name */
- gate_null, /* gate */
- NULL, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- }
-};
-
-/// execute_correct_state - Correct the cgraph state to ensure that newly
-/// inserted functions are processed before being converted to LLVM IR.
-static unsigned int execute_correct_state (void) {
- if (cgraph_state < CGRAPH_STATE_IPA_SSA)
- cgraph_state = CGRAPH_STATE_IPA_SSA;
- return 0;
-}
-
-/// gate_correct_state - Gate method for pass_gimple_correct_state.
-static bool gate_correct_state (void) {
- return true;
-}
-
-/// pass_gimple_correct_state - Gimple pass that corrects the cgraph state so
-/// newly inserted functions are processed before being converted to LLVM IR.
-static struct gimple_opt_pass pass_gimple_correct_state =
-{
- {
- GIMPLE_PASS,
- "*gimple_correct_state", /* name */
- gate_correct_state, /* gate */
- execute_correct_state, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- }
-};
-
-/// pass_ipa_null - IPA pass that does nothing.
-static struct ipa_opt_pass_d pass_ipa_null = {
- {
- IPA_PASS,
- "*ipa_null", /* name */
- gate_null, /* gate */
- NULL, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- },
- NULL, /* generate_summary */
- NULL, /* write_summary */
- NULL, /* read_summary */
- NULL, /* function_read_summary */
- NULL, /* stmt_fixup */
- 0, /* TODOs */
- NULL, /* function_transform */
- NULL /* variable_transform */
-};
-
-/// pass_rtl_null - RTL pass that does nothing.
-static struct rtl_opt_pass pass_rtl_null =
-{
- {
- RTL_PASS,
- "*rtl_null", /* name */
- gate_null, /* gate */
- NULL, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- }
-};
-
-/// pass_simple_ipa_null - Simple IPA pass that does nothing.
-static struct simple_ipa_opt_pass pass_simple_ipa_null =
-{
- {
- SIMPLE_IPA_PASS,
- "*simple_ipa_null", /* name */
- gate_null, /* gate */
- NULL, /* execute */
- NULL, /* sub */
- NULL, /* next */
- 0, /* static_pass_number */
- TV_NONE, /* tv_id */
- 0, /* properties_required */
- 0, /* properties_provided */
- 0, /* properties_destroyed */
- 0, /* todo_flags_start */
- 0 /* todo_flags_finish */
- }
-};
-
-
-// Garbage collector roots.
-extern const struct ggc_cache_tab gt_ggc_rc__gt_llvm_cache_h[];
-
-
-/// PluginFlags - Flag arguments for the plugin.
-
-struct FlagDescriptor {
- const char *Key; // The plugin argument is -fplugin-arg-llvm-KEY.
- bool *Flag; // Set to true if the flag is seen.
-};
-
-static FlagDescriptor PluginFlags[] = {
- { "debug-pass-structure", &DebugPassStructure},
- { "debug-pass-arguments", &DebugPassArguments},
- { "disable-llvm-optzns", &DisableLLVMOptimizations },
- { "enable-gcc-optzns", &EnableGCCOptimizations },
- { "emit-ir", &EmitIR },
- { "save-gcc-output", &SaveGCCOutput },
- { NULL, NULL } // Terminator.
-};
-
-
-/// llvm_plugin_info - Information about this plugin. Users can access this
-/// using "gcc --help -v".
-static struct plugin_info llvm_plugin_info = {
- REVISION, // version
- // TODO provide something useful here
- NULL // help
-};
-
-static bool version_check(struct plugin_gcc_version *gcc_version,
- struct plugin_gcc_version *plugin_version) {
- // Make it possible to turn off the version check - useful for testing gcc
- // bootstrap.
- if (getenv("dragonegg_disable_version_check"))
- return true;
-
- // Check that the running gcc has exactly the same version as the gcc we were
- // built against. This strict check seems wise when developing against a fast
- // moving gcc tree. TODO: Use a milder check if doing a "release build".
- return plugin_default_version_check (gcc_version, plugin_version);
-}
-
-
-/// plugin_init - Plugin initialization routine, called by GCC. This is the
-/// first code executed in the plugin (except for constructors). Configure
-/// the plugin and setup GCC, taking over optimization and code generation.
-int __attribute__ ((visibility("default")))
-plugin_init(struct plugin_name_args *plugin_info,
- struct plugin_gcc_version *version) {
- const char *plugin_name = plugin_info->base_name;
- struct register_pass_info pass_info;
-
- // Check that the plugin is compatible with the running gcc.
- if (!version_check (&gcc_version, version)) {
- errs() << "Incompatible plugin version\n";
- return 1;
- }
-
- // Provide GCC with our version and help information.
- register_callback (plugin_name, PLUGIN_INFO, NULL, &llvm_plugin_info);
-
- // Process any plugin arguments.
- {
- struct plugin_argument *argv = plugin_info->argv;
- int argc = plugin_info->argc;
-
- for (int i = 0; i < argc; ++i) {
- bool Found = false;
-
- // Look for a matching flag.
- for (FlagDescriptor *F = PluginFlags; F->Key; ++F) {
- if (strcmp (argv[i].key, F->Key))
- continue;
-
- if (argv[i].value)
- warning (0, G_("option '-fplugin-arg-%s-%s=%s' ignored"
- " (superfluous '=%s')"),
- plugin_name, argv[i].key, argv[i].value, argv[i].value);
- else
- *F->Flag = true;
-
- Found = true;
- break;
- }
-
- if (!Found)
- warning (0, G_("plugin %qs: unrecognized argument %qs ignored"),
- plugin_name, argv[i].key);
- }
- }
-
- // Obtain exclusive use of the assembly code output file. This stops GCC from
- // writing anything at all to the assembly file - only we get to write to it.
- TakeoverAsmOutput();
-
- // Register our garbage collector roots.
- register_callback (plugin_name, PLUGIN_REGISTER_GGC_CACHES, NULL,
- (void *)gt_ggc_rc__gt_llvm_cache_h);
-
- // Perform late initialization just before processing the compilation unit.
- register_callback (plugin_name, PLUGIN_START_UNIT, llvm_start_unit, NULL);
-
- // Turn off all gcc optimization passes.
- if (!EnableGCCOptimizations) {
- // TODO: figure out a good way of turning off ipa optimization passes.
- // Could just set optimize to zero (after taking a copy), but this would
- // also impact front-end optimizations.
-
- // Leave pass_inline_parameters. Otherwise our vector lowering fails since
- // immediates have not been propagated into builtin callsites.
-
- // Leave pass_ipa_function_and_variable_visibility. Needed for correctness.
-
- // Turn off pass_ipa_early_inline.
- pass_info.pass = &pass_simple_ipa_null.pass;
- pass_info.reference_pass_name = "einline_ipa";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Leave pass_ipa_free_lang_data.
-
- // Leave pass pass_early_local_passes::pass_fixup_cfg. ???
-
- // Leave pass pass_early_local_passes::pass_tree_profile.
-
- // Leave pass_early_local_passes::pass_cleanup_cfg. ???
-
- // Leave pass_early_local_passes::pass_init_datastructures. ???
-
- // Leave pass_early_local_passes::pass_expand_omp.
-
- // Leave pass_early_local_passes::pass_referenced_vars. ???
-
- // Leave pass_early_local_passes::pass_build_ssa.
-
- // Leave pass_early_local_passes::pass_early_warn_uninitialized.
-
- // Leave pass_early_local_passes::pass_rebuild_cgraph_edges. ???
-
- // Leave pass_early_local_passes::pass_early_inline. Otherwise our vector
- // lowering fails since immediates have not been propagated into builtin
- // callsites.
-
- // Insert a pass that ensures that any newly inserted functions, for example
- // those generated by OMP expansion, are processed before being converted to
- // LLVM IR.
- pass_info.pass = &pass_gimple_correct_state.pass;
- pass_info.reference_pass_name = "early_optimizations";
- pass_info.ref_pass_instance_number = 1;
- pass_info.pos_op = PASS_POS_INSERT_BEFORE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_early_local_passes::pass_all_early_optimizations.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "early_optimizations";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Leave pass_early_local_passes::pass_release_ssa_names. ???
-
- // Leave pass_early_local_passes::pass_rebuild_cgraph_edges. ???
-
- // Leave pass_inline_parameters. Otherwise our vector lowering fails since
- // immediates have not been propagated into builtin callsites.
-
- // Turn off pass_ipa_increase_alignment.
- pass_info.pass = &pass_simple_ipa_null.pass;
- pass_info.reference_pass_name = "increase_alignment";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_matrix_reorg.
- pass_info.pass = &pass_simple_ipa_null.pass;
- pass_info.reference_pass_name = "matrix-reorg";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Leave pass_ipa_whole_program_visibility. ???
-
- // Turn off pass_ipa_cp.
- pass_info.pass = &pass_ipa_null.pass;
- pass_info.reference_pass_name = "cp";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_inline.
- pass_info.pass = &pass_ipa_null.pass;
- pass_info.reference_pass_name = "inline";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_reference.
- pass_info.pass = &pass_ipa_null.pass;
- pass_info.reference_pass_name = "static-var";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_pure_const.
- pass_info.pass = &pass_ipa_null.pass;
- pass_info.reference_pass_name = "pure-const";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_type_escape.
- pass_info.pass = &pass_simple_ipa_null.pass;
- pass_info.reference_pass_name = "type-escape-var";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_pta.
- pass_info.pass = &pass_simple_ipa_null.pass;
- pass_info.reference_pass_name = "pta";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Turn off pass_ipa_struct_reorg.
- pass_info.pass = &pass_simple_ipa_null.pass;
- pass_info.reference_pass_name = "ipa_struct_reorg";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
- }
-
- // Replace the LTO gimple pass. If GCC optimizations are disabled then this
- // is where functions are converted to LLVM IR. When GCC optimizations are
- // enabled then only aliases and thunks are output here, with functions being
- // converted later after all tree optimizers have run.
- pass_info.pass = &pass_emit_functions.pass;
- pass_info.reference_pass_name = "lto_gimple_out";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Replace the LTO decls pass with conversion of global variables to LLVM IR.
- pass_info.pass = &pass_emit_variables.pass;
- pass_info.reference_pass_name = "lto_decls_out";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
-#if (GCC_MINOR < 6)
- // Disable any other LTO passes.
- pass_info.pass = &pass_ipa_null.pass;
- pass_info.reference_pass_name = "lto_wpa_fixup";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-#endif
-
- if (!EnableGCCOptimizations) {
- // Disable pass_lower_eh_dispatch, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "ehdisp";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_all_optimizations, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "*all_optimizations";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_lower_complex_O0, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "cplxlower0";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_cleanup_eh, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "ehcleanup";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_lower_resx, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "resx";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_nrv, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "nrv";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_mudflap_2, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "mudflap2";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Disable pass_cleanup_cfg_post_optimizing, which runs after LLVM conversion.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "optimized";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // TODO: Disable pass_warn_function_noreturn?
- }
-
- // Replace rtl expansion.
- if (!EnableGCCOptimizations) {
- // Replace rtl expansion with a pass that pretends to codegen functions, but
- // actually only does the hoop jumping that GCC requires at this point.
- pass_info.pass = &pass_disable_rtl.pass;
- pass_info.reference_pass_name = "expand";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
- } else {
- // Replace rtl expansion with a pass that converts functions to LLVM IR.
- pass_info.pass = &pass_rtl_emit_function.pass;
- pass_info.reference_pass_name = "expand";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
- }
-
- // Turn off all other rtl passes.
- pass_info.pass = &pass_gimple_null.pass;
- pass_info.reference_pass_name = "*rest_of_compilation";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- pass_info.pass = &pass_rtl_null.pass;
- pass_info.reference_pass_name = "*clean_state";
- pass_info.ref_pass_instance_number = 0;
- pass_info.pos_op = PASS_POS_REPLACE;
- register_callback (plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &pass_info);
-
- // Finish the .s file once the compilation unit has been completely processed.
- register_callback (plugin_name, PLUGIN_FINISH_UNIT, llvm_finish_unit, NULL);
-
- // Run shutdown code when GCC exits.
- register_callback (plugin_name, PLUGIN_FINISH, llvm_finish, NULL);
-
- return 0;
-}
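For illustration only: every pass replacement registered in plugin_init above follows
one idiom.  A condensed sketch of it is given below, written against the GCC plugin API
used above; the helper name replace_gcc_pass and the explicit header list are
assumptions for the sketch, while the struct fields and the callback are exactly those
used in the original code.

#include "gcc-plugin.h"   // register_callback, PLUGIN_PASS_MANAGER_SETUP (assumed header)
#include "tree-pass.h"    // struct opt_pass, register_pass_info (assumed header)

static void replace_gcc_pass(const char *plugin_name, struct opt_pass *with,
                             const char *victim) {
  struct register_pass_info info;
  info.pass = with;                   // e.g. &pass_gimple_null.pass
  info.reference_pass_name = victim;  // name of the pass being replaced
  info.ref_pass_instance_number = 0;  // 0 means every instance of that pass
  info.pos_op = PASS_POS_REPLACE;
  register_callback(plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &info);
}

// e.g. replace_gcc_pass(plugin_name, &pass_gimple_null.pass, "ehcleanup");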
Removed: dragonegg/trunk/llvm-cache.c
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-cache.c?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-cache.c (original)
+++ dragonegg/trunk/llvm-cache.c (removed)
@@ -1,142 +0,0 @@
-/* Caching values "in" trees
-Copyright (C) 2005, 2006, 2007 Free Software Foundation, Inc.
-
-This file is part of GCC.
-
-GCC is free software; you can redistribute it and/or modify it under
-the terms of the GNU General Public License as published by the Free
-Software Foundation; either version 2, or (at your option) any later
-version.
-
-GCC is distributed in the hope that it will be useful, but WITHOUT ANY
-WARRANTY; without even the implied warranty of MERCHANTABILITY or
-FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
-for more details.
-
-You should have received a copy of the GNU General Public License
-along with GCC; see the file COPYING. If not, write to the Free
-Software Foundation, 59 Temple Place - Suite 330, Boston, MA
-02111-1307, USA. */
-
-/*===----------------------------------------------------------------------===
- This code lets you associate a void* with a tree, as if it were cached
- inside the tree: if the tree is garbage collected and reallocated, then the
- cached value will have been cleared.
- ===----------------------------------------------------------------------===*/
-
-/* Plugin headers. */
-#include "llvm-cache.h"
-
-/* GCC headers. */
-#include "config.h"
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-#include "ggc.h"
-
-struct GTY(()) tree_llvm_map {
- struct tree_map_base base;
- const void * GTY((skip)) val;
-};
-
-#define tree_llvm_map_eq tree_map_base_eq
-#define tree_llvm_map_hash tree_map_base_hash
-#define tree_llvm_map_marked_p tree_map_base_marked_p
-
-static GTY ((if_marked ("tree_llvm_map_marked_p"),
- param_is(struct tree_llvm_map)))
- htab_t llvm_cache;
-
-/* Garbage collector header. */
-#include "gt-llvm-cache.h"
-
-/* llvm_has_cached - Returns whether a value has been associated with the
- tree. */
-int llvm_has_cached(union tree_node *tree) {
- struct tree_map_base in;
-
- if (!llvm_cache)
- return false;
-
- in.from = tree;
- return htab_find(llvm_cache, &in) != NULL;
-}
-
-/* llvm_get_cached - Returns the value associated with the tree, or NULL. */
-const void *llvm_get_cached(union tree_node *tree) {
- struct tree_llvm_map *h;
- struct tree_map_base in;
-
- if (!llvm_cache)
- return NULL;
-
- in.from = tree;
- h = (struct tree_llvm_map *) htab_find(llvm_cache, &in);
- return h ? h->val : NULL;
-}
-
-/* llvm_set_cached - Associates the given value with the tree (and returns it).
- To delete an association, pass a NULL value here. */
-const void *llvm_set_cached(union tree_node *tree, const void *val) {
- struct tree_llvm_map **slot;
- struct tree_map_base in;
-
- in.from = tree;
-
- /* If deleting, remove the slot. */
- if (val == NULL) {
- if (llvm_cache)
- htab_remove_elt(llvm_cache, &in);
- return NULL;
- }
-
- if (!llvm_cache)
- llvm_cache = htab_create_ggc(1024, tree_llvm_map_hash, tree_llvm_map_eq, NULL);
-
- slot = (struct tree_llvm_map **) htab_find_slot(llvm_cache, &in, INSERT);
- gcc_assert(slot);
-
- if (!*slot) {
- *slot = GGC_NEW(struct tree_llvm_map);
- (*slot)->base.from = tree;
- }
-
- (*slot)->val = val;
-
- return val;
-}
-
-struct update {
- const void *old_val;
- const void *new_val;
-};
-
-/* replace - If the current value for the slot matches old_val, then replace
- it with new_val, or delete it if new_val is NULL. */
-static int replace(void **slot, void *data) {
- struct tree_llvm_map *entry = *(struct tree_llvm_map **)slot;
- struct update *u = (struct update *)data;
-
- if (entry->val != u->old_val)
- return 1;
-
- if (u->new_val != NULL)
- entry->val = u->new_val;
- else
- htab_clear_slot(llvm_cache, slot);
-
- return 1;
-}
-
-/* llvm_replace_cached - Replaces all occurrences of old_val with new_val. */
-void llvm_replace_cached(const void *old_val, const void *new_val) {
- struct update u;
- u.old_val = old_val;
- u.new_val = new_val;
-
- if (!llvm_cache || old_val == NULL)
- return;
-
- htab_traverse(llvm_cache, replace, &u);
-}
Removed: dragonegg/trunk/llvm-cache.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-cache.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-cache.h (original)
+++ dragonegg/trunk/llvm-cache.h (removed)
@@ -1,45 +0,0 @@
-/*===-------- llvm-cache.h - Caching values "in" GCC trees --------*- C -*-===*\
-|* *|
-|* Copyright (C) 2009, 2010, 2011 Duncan Sands. *|
-|* *|
-|* This file is part of DragonEgg. *|
-|* *|
-|* DragonEgg is free software; you can redistribute it and/or modify it under *|
-|* the terms of the GNU General Public License as published by the Free *|
-|* Software Foundation; either version 2, or (at your option) any later *|
-|* version. *|
-|* *|
-|* DragonEgg is distributed in the hope that it will be useful, but WITHOUT *|
-|* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or *|
-|* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for *|
-|* more details. *|
-|* You should have received a copy of the GNU General Public License along *|
-|* with DragonEgg; see the file COPYING. If not, write to the Free Software *|
-|* Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA. *|
-|* *|
-|*===----------------------------------------------------------------------===*|
-|* This code lets you associate a void* with a tree, as if it were cached *|
-|* inside the tree: if the tree is garbage collected and reallocated, then *|
-|* the cached value will have been cleared. *|
-\*===----------------------------------------------------------------------===*/
-
-#ifndef LLVM_CACHE_H
-#define LLVM_CACHE_H
-
-union tree_node;
-
-/* llvm_has_cached - Returns whether a value has been associated with the
- tree. */
-extern int llvm_has_cached(union tree_node *tree);
-
-/* llvm_get_cached - Returns the value associated with the tree, or NULL. */
-extern const void *llvm_get_cached(union tree_node *tree);
-
-/* llvm_set_cached - Associates the given value with the tree (and returns it).
- To delete an association, pass NULL for the value. */
-extern const void *llvm_set_cached(union tree_node *tree, const void *val);
-
-/* llvm_replace_cached - Replaces all occurrences of old_val with new_val. */
-extern void llvm_replace_cached(const void *old_val, const void *new_val);
-
-#endif /* LLVM_CACHE_H */
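A small usage sketch of the caching API declared above, as the C++ conversion code
calls it; the helper lookupOrCreate and its callback parameter are illustrative
stand-ins, not dragonegg code:

#include "llvm-cache.h"

// Return the value previously cached on a tree, creating and caching it on first use.
static void *lookupOrCreate(union tree_node *t, void *(*create)(union tree_node *)) {
  if (llvm_has_cached(t))
    return const_cast<void *>(llvm_get_cached(t)); // reuse the existing association
  void *v = create(t);                             // build the LLVM-side object once
  llvm_set_cached(t, v);                           // remember it "in" the tree
  return v;
}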
Removed: dragonegg/trunk/llvm-constant.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-constant.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-constant.cpp (original)
+++ dragonegg/trunk/llvm-constant.cpp (removed)
@@ -1,1229 +0,0 @@
-//===----- llvm-constant.cpp - Converting and working with constants ------===//
-//
-// Copyright (C) 2011 Duncan Sands
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This is the code that converts GCC constants to LLVM.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-constant.h"
-#include "llvm-internal.h"
-#include "llvm-tree.h"
-
-// LLVM headers
-#include "llvm/GlobalVariable.h"
-#include "llvm/LLVMContext.h"
-#include "llvm/Support/Host.h"
-#include "llvm/Target/TargetData.h"
-
-// System headers
-#include <gmp.h>
-#include <map>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-
-#include "flags.h"
-#include "tm_p.h"
-}
-
-static LLVMContext &Context = getGlobalContext();
-
-/// EncodeExpr - Write the given expression into Buffer as it would appear in
-/// memory on the target (the buffer is resized to contain exactly the bytes
-/// written). Return the number of bytes written; this can also be obtained
-/// by querying the buffer's size.
-/// The following kinds of expressions are currently supported: INTEGER_CST,
-/// REAL_CST, COMPLEX_CST, VECTOR_CST, STRING_CST.
-static unsigned EncodeExpr(tree exp, SmallVectorImpl<unsigned char> &Buffer) {
- const tree type = TREE_TYPE(exp);
- unsigned SizeInBytes = (TREE_INT_CST_LOW(TYPE_SIZE(type)) + 7) / 8;
- Buffer.resize(SizeInBytes);
- unsigned BytesWritten = native_encode_expr(exp, &Buffer[0], SizeInBytes);
- assert(BytesWritten == SizeInBytes && "Failed to fully encode expression!");
- return BytesWritten;
-}
-
-static Constant *ConvertINTEGER_CST(tree exp) {
- const Type *Ty = ConvertType(TREE_TYPE(exp));
-
- // Handle i128 specially.
- if (const IntegerType *IT = dyn_cast<IntegerType>(Ty)) {
- if (IT->getBitWidth() == 128) {
- // GCC only supports i128 on 64-bit systems.
- assert(HOST_BITS_PER_WIDE_INT == 64 &&
- "i128 only supported on 64-bit system");
- uint64_t Bits[] = { TREE_INT_CST_LOW(exp), TREE_INT_CST_HIGH(exp) };
- return ConstantInt::get(Context, APInt(128, 2, Bits));
- }
- }
-
- // Build the value as a ulong constant, then constant fold it to the right
- // type. This handles overflow and other things appropriately.
- uint64_t IntValue = getINTEGER_CSTVal(exp);
- ConstantInt *C = ConstantInt::get(Type::getInt64Ty(Context), IntValue);
- // The destination type can be a pointer, integer or floating point
- // so we need a generalized cast here
- Instruction::CastOps opcode = CastInst::getCastOpcode(C, false, Ty,
- !TYPE_UNSIGNED(TREE_TYPE(exp)));
- return TheFolder->CreateCast(opcode, C, Ty);
-}
-
-static Constant *ConvertREAL_CST(tree exp) {
- // TODO: Test new implementation on a big-endian machine.
-
- // Encode the constant in Buffer in target format.
- SmallVector<unsigned char, 16> Buffer;
- EncodeExpr(exp, Buffer);
-
- // Discard any alignment padding, which we assume comes at the end.
- unsigned Precision = TYPE_PRECISION(TREE_TYPE(exp));
- assert((Precision & 7) == 0 && "Unsupported real number precision!");
- Buffer.resize(Precision / 8);
-
- // We are going to view the buffer as an array of APInt words. Ensure that
- // the buffer contains a whole number of words by extending it if necessary.
- unsigned Words = (Precision + integerPartWidth - 1) / integerPartWidth;
- // On a little-endian machine extend the buffer by adding bytes to the end.
- Buffer.resize(Words * (integerPartWidth / 8));
- // On a big-endian machine extend the buffer by adding bytes to the beginning.
- if (BYTES_BIG_ENDIAN)
- std::copy_backward(Buffer.begin(), Buffer.begin() + Precision / 8,
- Buffer.end());
-
- // Ensure that the least significant word comes first: we are going to make an
- // APInt, and the APInt constructor wants the least significant word first.
- integerPart *Parts = (integerPart *)&Buffer[0];
- if (BYTES_BIG_ENDIAN)
- std::reverse(Parts, Parts + Words);
-
- bool isPPC_FP128 = ConvertType(TREE_TYPE(exp))->isPPC_FP128Ty();
- if (isPPC_FP128) {
- // This type is actually a pair of doubles in disguise. They turn up the
- // wrong way round here, so flip them.
- assert(FLOAT_WORDS_BIG_ENDIAN && "PPC not big endian!");
- assert(Words == 2 && Precision == 128 && "Strange size for PPC_FP128!");
- std::swap(Parts[0], Parts[1]);
- }
-
- // Form an APInt from the buffer, an APFloat from the APInt, and the desired
- // floating point constant from the APFloat, phew!
- const APInt &I = APInt(Precision, Words, Parts);
- return ConstantFP::get(Context, APFloat(I, !isPPC_FP128));
-}
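// A toy check of the word bookkeeping above, assuming 64-bit APInt words (a host
// assumption) and the x86 80-bit long double, i.e. Precision == 80; the numbers,
// not the code, are the point.

#include <cstdio>

int main() {
  unsigned Precision = 80;    // TYPE_PRECISION of long double on x86
  unsigned PartWidth = 64;    // integerPartWidth on a typical 64-bit host
  unsigned PayloadBytes = Precision / 8;                       // 10
  unsigned Words = (Precision + PartWidth - 1) / PartWidth;    // 2
  unsigned BufferBytes = Words * (PartWidth / 8);              // 16
  std::printf("%u payload bytes -> %u words -> %u buffer bytes\n",
              PayloadBytes, Words, BufferBytes);
  return 0;
}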
-
-static Constant *ConvertVECTOR_CST(tree exp) {
- if (!TREE_VECTOR_CST_ELTS(exp))
- return Constant::getNullValue(ConvertType(TREE_TYPE(exp)));
-
- std::vector<Constant*> Elts;
- for (tree elt = TREE_VECTOR_CST_ELTS(exp); elt; elt = TREE_CHAIN(elt))
- Elts.push_back(ConvertConstant(TREE_VALUE(elt)));
-
- // The vector should be zero filled if insufficient elements are provided.
- if (Elts.size() < TYPE_VECTOR_SUBPARTS(TREE_TYPE(exp))) {
- tree EltType = TREE_TYPE(TREE_TYPE(exp));
- Constant *Zero = Constant::getNullValue(ConvertType(EltType));
- while (Elts.size() < TYPE_VECTOR_SUBPARTS(TREE_TYPE(exp)))
- Elts.push_back(Zero);
- }
-
- return ConstantVector::get(Elts);
-}
-
-static Constant *ConvertSTRING_CST(tree exp) {
- const ArrayType *StrTy = cast<ArrayType>(ConvertType(TREE_TYPE(exp)));
- const Type *ElTy = StrTy->getElementType();
-
- unsigned Len = (unsigned)TREE_STRING_LENGTH(exp);
-
- std::vector<Constant*> Elts;
- if (ElTy->isIntegerTy(8)) {
- const unsigned char *InStr =(const unsigned char *)TREE_STRING_POINTER(exp);
- for (unsigned i = 0; i != Len; ++i)
- Elts.push_back(ConstantInt::get(Type::getInt8Ty(Context), InStr[i]));
- } else if (ElTy->isIntegerTy(16)) {
- assert((Len&1) == 0 &&
- "Length in bytes should be a multiple of element size");
- const uint16_t *InStr =
- (const unsigned short *)TREE_STRING_POINTER(exp);
- for (unsigned i = 0; i != Len/2; ++i) {
- // gcc has constructed the initializer elements in the target endianness,
- // but we're going to treat them as ordinary shorts from here, with
- // host endianness. Adjust if necessary.
- if (llvm::sys::isBigEndianHost() == BYTES_BIG_ENDIAN)
- Elts.push_back(ConstantInt::get(Type::getInt16Ty(Context), InStr[i]));
- else
- Elts.push_back(ConstantInt::get(Type::getInt16Ty(Context),
- ByteSwap_16(InStr[i])));
- }
- } else if (ElTy->isIntegerTy(32)) {
- assert((Len&3) == 0 &&
- "Length in bytes should be a multiple of element size");
- const uint32_t *InStr = (const uint32_t *)TREE_STRING_POINTER(exp);
- for (unsigned i = 0; i != Len/4; ++i) {
- // gcc has constructed the initializer elements in the target endianness,
- // but we're going to treat them as ordinary ints from here, with
- // host endianness. Adjust if necessary.
- if (llvm::sys::isBigEndianHost() == BYTES_BIG_ENDIAN)
- Elts.push_back(ConstantInt::get(Type::getInt32Ty(Context), InStr[i]));
- else
- Elts.push_back(ConstantInt::get(Type::getInt32Ty(Context),
- ByteSwap_32(InStr[i])));
- }
- } else {
- assert(0 && "Unknown character type!");
- }
-
- unsigned LenInElts = Len /
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(TREE_TYPE(TREE_TYPE(exp))));
- unsigned ConstantSize = StrTy->getNumElements();
-
- if (LenInElts != ConstantSize) {
- // If this is a variable sized array type, set the length to LenInElts.
- if (ConstantSize == 0) {
- tree Domain = TYPE_DOMAIN(TREE_TYPE(exp));
- if (!Domain || !TYPE_MAX_VALUE(Domain)) {
- ConstantSize = LenInElts;
- StrTy = ArrayType::get(ElTy, LenInElts);
- }
- }
-
- if (ConstantSize < LenInElts) {
- // Only some chars are being used, truncate the string: char X[2] = "foo";
- Elts.resize(ConstantSize);
- } else {
- // Fill the end of the string with nulls.
- Constant *C = Constant::getNullValue(ElTy);
- for (; LenInElts != ConstantSize; ++LenInElts)
- Elts.push_back(C);
- }
- }
- return ConstantArray::get(StrTy, Elts);
-}
-
-static Constant *ConvertCOMPLEX_CST(tree exp) {
- Constant *Elts[2] = {
- ConvertConstant(TREE_REALPART(exp)),
- ConvertConstant(TREE_IMAGPART(exp))
- };
- return ConstantStruct::get(Context, Elts, 2, false);
-}
-
-static Constant *ConvertNOP_EXPR(tree exp) {
- Constant *Elt = ConvertConstant(TREE_OPERAND(exp, 0));
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- bool EltIsSigned = !TYPE_UNSIGNED(TREE_TYPE(TREE_OPERAND(exp, 0)));
- bool TyIsSigned = !TYPE_UNSIGNED(TREE_TYPE(exp));
-
- // If this is a structure-to-structure cast, just return the uncasted value.
- if (!Elt->getType()->isSingleValueType() || !Ty->isSingleValueType())
- return Elt;
-
- // Elt and Ty can be integer, float or pointer here: need generalized cast
- Instruction::CastOps opcode = CastInst::getCastOpcode(Elt, EltIsSigned,
- Ty, TyIsSigned);
- return TheFolder->CreateCast(opcode, Elt, Ty);
-}
-
-static Constant *ConvertCONVERT_EXPR(tree exp) {
- Constant *Elt = ConvertConstant(TREE_OPERAND(exp, 0));
- bool EltIsSigned = !TYPE_UNSIGNED(TREE_TYPE(TREE_OPERAND(exp, 0)));
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- bool TyIsSigned = !TYPE_UNSIGNED(TREE_TYPE(exp));
- Instruction::CastOps opcode = CastInst::getCastOpcode(Elt, EltIsSigned, Ty,
- TyIsSigned);
- return TheFolder->CreateCast(opcode, Elt, Ty);
-}
-
-static Constant *ConvertPOINTER_PLUS_EXPR(tree exp) {
- Constant *Ptr = ConvertConstant(TREE_OPERAND(exp, 0)); // The pointer.
- Constant *Idx = ConvertConstant(TREE_OPERAND(exp, 1)); // The offset in bytes.
-
- // Convert the pointer into an i8* and add the offset to it.
- Ptr = TheFolder->CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
- Constant *GEP = POINTER_TYPE_OVERFLOW_UNDEFINED ?
- TheFolder->CreateInBoundsGetElementPtr(Ptr, &Idx, 1) :
- TheFolder->CreateGetElementPtr(Ptr, &Idx, 1);
-
- // The result may be of a different pointer type.
- return TheFolder->CreateBitCast(GEP, ConvertType(TREE_TYPE(exp)));
-}
-
-static Constant *ConvertBinOp_CST(tree exp) {
- Constant *LHS = ConvertConstant(TREE_OPERAND(exp, 0));
- bool LHSIsSigned = !TYPE_UNSIGNED(TREE_TYPE(TREE_OPERAND(exp,0)));
- Constant *RHS = ConvertConstant(TREE_OPERAND(exp, 1));
- bool RHSIsSigned = !TYPE_UNSIGNED(TREE_TYPE(TREE_OPERAND(exp,1)));
- Instruction::CastOps opcode;
- if (LHS->getType()->isPointerTy()) {
- const Type *IntPtrTy = getTargetData().getIntPtrType(Context);
- opcode = CastInst::getCastOpcode(LHS, LHSIsSigned, IntPtrTy, false);
- LHS = TheFolder->CreateCast(opcode, LHS, IntPtrTy);
- opcode = CastInst::getCastOpcode(RHS, RHSIsSigned, IntPtrTy, false);
- RHS = TheFolder->CreateCast(opcode, RHS, IntPtrTy);
- }
-
- Constant *Result;
- switch (TREE_CODE(exp)) {
- default: assert(0 && "Unexpected case!");
- case PLUS_EXPR: Result = TheFolder->CreateAdd(LHS, RHS); break;
- case MINUS_EXPR: Result = TheFolder->CreateSub(LHS, RHS); break;
- }
-
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- bool TyIsSigned = !TYPE_UNSIGNED(TREE_TYPE(exp));
- opcode = CastInst::getCastOpcode(Result, LHSIsSigned, Ty, TyIsSigned);
- return TheFolder->CreateCast(opcode, Result, Ty);
-}
-
-static Constant *ConvertArrayCONSTRUCTOR(tree exp) {
- // Vectors are like arrays, but the domain is stored via an array
- // type indirectly.
-
- // If we have a lower bound for the range of the type, get it.
- tree init_type = TREE_TYPE(exp);
- tree min_element = size_zero_node;
- std::vector<Constant*> ResultElts;
-
- if (TREE_CODE(init_type) == VECTOR_TYPE) {
- ResultElts.resize(TYPE_VECTOR_SUBPARTS(init_type));
- } else {
- assert(TREE_CODE(init_type) == ARRAY_TYPE && "Unknown type for init");
- tree Domain = TYPE_DOMAIN(init_type);
- if (Domain && TYPE_MIN_VALUE(Domain))
- min_element = fold_convert(sizetype, TYPE_MIN_VALUE(Domain));
-
- if (Domain && TYPE_MAX_VALUE(Domain)) {
- tree max_element = fold_convert(sizetype, TYPE_MAX_VALUE(Domain));
- tree size = size_binop (MINUS_EXPR, max_element, min_element);
- size = size_binop (PLUS_EXPR, size, size_one_node);
-
- if (host_integerp(size, 1))
- ResultElts.resize(tree_low_cst(size, 1));
- }
- }
-
- unsigned NextFieldToFill = 0;
- unsigned HOST_WIDE_INT ix;
- tree elt_index, elt_value;
- Constant *SomeVal = 0;
- FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (exp), ix, elt_index, elt_value) {
- // Find and decode the constructor's value.
- Constant *Val = ConvertConstant(elt_value);
- SomeVal = Val;
-
- // Get the index position of the element within the array. Note that this
- // can be NULL_TREE, which means that it belongs in the next available slot.
- tree index = elt_index;
-
- // The first and last field to fill in, inclusive.
- unsigned FieldOffset, FieldLastOffset;
- if (index && TREE_CODE(index) == RANGE_EXPR) {
- tree first = fold_convert (sizetype, TREE_OPERAND(index, 0));
- tree last = fold_convert (sizetype, TREE_OPERAND(index, 1));
-
- first = size_binop (MINUS_EXPR, first, min_element);
- last = size_binop (MINUS_EXPR, last, min_element);
-
- assert(host_integerp(first, 1) && host_integerp(last, 1) &&
- "Unknown range_expr!");
- FieldOffset = tree_low_cst(first, 1);
- FieldLastOffset = tree_low_cst(last, 1);
- } else if (index) {
- index = size_binop (MINUS_EXPR, fold_convert (sizetype, index),
- min_element);
- assert(host_integerp(index, 1));
- FieldOffset = tree_low_cst(index, 1);
- FieldLastOffset = FieldOffset;
- } else {
- FieldOffset = NextFieldToFill;
- FieldLastOffset = FieldOffset;
- }
-
- // Process all of the elements in the range.
- for (--FieldOffset; FieldOffset != FieldLastOffset; ) {
- ++FieldOffset;
- if (FieldOffset == ResultElts.size())
- ResultElts.push_back(Val);
- else {
- if (FieldOffset >= ResultElts.size())
- ResultElts.resize(FieldOffset+1);
- ResultElts[FieldOffset] = Val;
- }
-
- NextFieldToFill = FieldOffset+1;
- }
- }
-
- // Zero length array.
- if (ResultElts.empty())
- return Constant::getNullValue(ConvertType(TREE_TYPE(exp)));
- assert(SomeVal && "If we had some initializer, we should have some value!");
-
- // Do a post-pass over all of the elements. We're taking care of two things
- // here:
- // #1. If any elements did not have initializers specified, provide them
- // with a null init.
- // #2. If any of the elements have different types, return a struct instead
- // of an array. This can occur in cases where we have an array of
- // unions, and the various unions had different pieces init'd.
- const Type *ElTy = SomeVal->getType();
- Constant *Filler = Constant::getNullValue(ElTy);
- bool AllEltsSameType = true;
- for (unsigned i = 0, e = ResultElts.size(); i != e; ++i) {
- if (ResultElts[i] == 0)
- ResultElts[i] = Filler;
- else if (ResultElts[i]->getType() != ElTy)
- AllEltsSameType = false;
- }
-
- if (TREE_CODE(init_type) == VECTOR_TYPE) {
- assert(AllEltsSameType && "Vector of heterogeneous element types?");
- return ConstantVector::get(ResultElts);
- }
-
- Constant *Res = AllEltsSameType ?
- ConstantArray::get(ArrayType::get(ElTy, ResultElts.size()), ResultElts) :
- ConstantStruct::get(Context, ResultElts, false);
-
- // If the array does not require extra padding, return it.
- const Type *InitType = ConvertType(init_type);
- uint64_t ExpectedBits = getTargetData().getTypeAllocSizeInBits(InitType);
- uint64_t FoundBits = getTargetData().getTypeAllocSizeInBits(Res->getType());
- // The initializer may be bigger than the type if init_type is variable sized
- // or has no size (in which case the size is determined by the initial value).
- if (ExpectedBits <= FoundBits)
- return Res;
-
- // Wrap the array in a struct with padding at the end.
- Constant *PadElts[2];
- PadElts[0] = Res;
- PadElts[1] = UndefValue::get(ArrayType::get(Type::getInt8Ty(Context),
- (ExpectedBits - FoundBits) / 8));
- return ConstantStruct::get(Context, PadElts, 2, false);
-}
-
-
-namespace {
-/// ConstantLayoutInfo - A helper class used by ConvertRecordCONSTRUCTOR to
-/// lay out struct inits.
-struct ConstantLayoutInfo {
- const TargetData &TD;
-
- /// ResultElts - The initializer elements so far.
- std::vector<Constant*> ResultElts;
-
- /// StructIsPacked - This is set to true if we find out that we have to emit
- /// the ConstantStruct as a Packed LLVM struct type (because the LLVM
- /// alignment rules would prevent laying out the struct correctly).
- bool StructIsPacked;
-
- /// NextFieldByteStart - This field indicates the *byte* that the next field
- /// will start at. Put another way, this is the size of the struct as
- /// currently laid out, but without any tail padding considered.
- uint64_t NextFieldByteStart;
-
- /// MaxLLVMFieldAlignment - This is the largest alignment of any IR field,
- /// which is the alignment that the ConstantStruct will get.
- unsigned MaxLLVMFieldAlignment;
-
-
- ConstantLayoutInfo(const TargetData &TD) : TD(TD) {
- StructIsPacked = false;
- NextFieldByteStart = 0;
- MaxLLVMFieldAlignment = 1;
- }
-
- void ConvertToPacked();
- void AddFieldToRecordConstant(Constant *Val, uint64_t GCCFieldOffsetInBits);
- void AddBitFieldToRecordConstant(ConstantInt *Val,
- uint64_t GCCFieldOffsetInBits);
- void HandleTailPadding(uint64_t GCCStructBitSize);
-};
-
-}
-
-/// ConvertToPacked - Given a partially constructed initializer for a LLVM
-/// struct constant, change it to make all the implicit padding between elements
-/// be fully explicit.
-void ConstantLayoutInfo::ConvertToPacked() {
- assert(!StructIsPacked && "Struct is already packed");
- uint64_t EltOffs = 0;
- for (unsigned i = 0, e = ResultElts.size(); i != e; ++i) {
- Constant *Val = ResultElts[i];
-
- // Check to see if this element has an alignment that would cause it to get
- // offset. If so, insert explicit padding for the offset.
- unsigned ValAlign = TD.getABITypeAlignment(Val->getType());
- uint64_t AlignedEltOffs = TargetData::RoundUpAlignment(EltOffs, ValAlign);
-
- // If the alignment doesn't affect the element offset, then the value is ok.
- // Accept the field and keep moving.
- if (AlignedEltOffs == EltOffs) {
- EltOffs += TD.getTypeAllocSize(Val->getType());
- continue;
- }
-
- // Otherwise, there is padding here. Insert explicit zeros.
- const Type *PadTy = Type::getInt8Ty(Context);
- if (AlignedEltOffs-EltOffs != 1)
- PadTy = ArrayType::get(PadTy, AlignedEltOffs-EltOffs);
- ResultElts.insert(ResultElts.begin()+i,
- Constant::getNullValue(PadTy));
-
- // The padding is now element "i" and just bumped us up to "AlignedEltOffs".
- EltOffs = AlignedEltOffs;
- ++e; // One extra element to scan.
- }
-
- // Packed now!
- MaxLLVMFieldAlignment = 1;
- StructIsPacked = true;
-}
-
-
-/// AddFieldToRecordConstant - As ConvertRecordCONSTRUCTOR builds up an LLVM
-/// constant to represent a GCC CONSTRUCTOR node, it calls this method to add
-/// fields. The design of this is that it adds leading/trailing padding as
-/// needed to make the piece fit together and honor the GCC layout. This does
-/// not handle bitfields.
-///
-/// The arguments are:
-/// Val: The value to add to the struct, with a size that matches the size of
-/// the corresponding GCC field.
-/// GCCFieldOffsetInBits: The offset that we have to put Val in the result.
-///
-void ConstantLayoutInfo::
-AddFieldToRecordConstant(Constant *Val, uint64_t GCCFieldOffsetInBits) {
- // Figure out how to add this non-bitfield value to our constant struct so
- // that it ends up at the right offset. There are four cases we have to
- // think about:
- // 1. We may be able to just slap it onto the end of our struct and have
- // everything be ok.
- // 2. We may have to insert explicit padding into the LLVM struct to get
- // the initializer over into the right space. This is needed when the
- // GCC field has a larger alignment than the LLVM field.
- // 3. The LLVM field may be too far over and we may be forced to convert
- // this to an LLVM packed struct. This is required when the LLVM
- // alignment is larger than the GCC alignment.
- // 4. We may have a bitfield that needs to be merged into a previous
- // field.
- // Start by determining which case we have by looking at where LLVM and GCC
- // would place the field.
-
- // Verify that we haven't already laid out bytes that will overlap with
- // this new field.
- assert(NextFieldByteStart*8 <= GCCFieldOffsetInBits &&
- "Overlapping LLVM fields!");
-
- // Compute the offset the field would get if we just stuck 'Val' onto the
- // end of our structure right now. It is NextFieldByteStart rounded up to
- // the LLVM alignment of Val's type.
- unsigned ValLLVMAlign = 1;
-
- if (!StructIsPacked) { // Packed structs ignore the alignment of members.
- ValLLVMAlign = TD.getABITypeAlignment(Val->getType());
- MaxLLVMFieldAlignment = std::max(MaxLLVMFieldAlignment, ValLLVMAlign);
- }
-
- // LLVMNaturalByteOffset - This is where LLVM would drop the field if we
- // slap it onto the end of the struct.
- uint64_t LLVMNaturalByteOffset
- = TargetData::RoundUpAlignment(NextFieldByteStart, ValLLVMAlign);
-
- // If adding the LLVM field would push it over too far, then we must have a
- // case that requires the LLVM struct to be packed. Do it now if so.
- if (LLVMNaturalByteOffset*8 > GCCFieldOffsetInBits) {
- // Switch to packed.
- ConvertToPacked();
- assert(NextFieldByteStart*8 <= GCCFieldOffsetInBits &&
- "Packing didn't fix the problem!");
-
- // Recurse to add the field after converting to packed.
- return AddFieldToRecordConstant(Val, GCCFieldOffsetInBits);
- }
-
- // If the LLVM offset is not large enough, we need to insert explicit
- // padding in the LLVM struct between the fields.
- if (LLVMNaturalByteOffset*8 < GCCFieldOffsetInBits) {
- // Insert enough padding to fully fill in the hole. Insert padding from
- // NextFieldByteStart (not LLVMNaturalByteOffset) because the padding will
- // not get the same alignment as "Val".
- const Type *FillTy = Type::getInt8Ty(Context);
- if (GCCFieldOffsetInBits/8-NextFieldByteStart != 1)
- FillTy = ArrayType::get(FillTy,
- GCCFieldOffsetInBits/8-NextFieldByteStart);
- ResultElts.push_back(Constant::getNullValue(FillTy));
-
- NextFieldByteStart = GCCFieldOffsetInBits/8;
-
- // Recurse to add the field. This handles the case when the LLVM struct
- // needs to be converted to packed after inserting tail padding.
- return AddFieldToRecordConstant(Val, GCCFieldOffsetInBits);
- }
-
- // Slap 'Val' onto the end of our ConstantStruct, it must be known to land
- // at the right offset now.
- assert(LLVMNaturalByteOffset*8 == GCCFieldOffsetInBits);
- ResultElts.push_back(Val);
- NextFieldByteStart = LLVMNaturalByteOffset;
- NextFieldByteStart += TD.getTypeAllocSize(Val->getType());
-}
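// A standalone toy illustration of the non-bitfield cases 1-3 described above: compute
// the "natural" LLVM byte offset (the current size rounded up to the field's ABI
// alignment) and compare it with the byte offset GCC demands.  The numbers are made up
// for the example; the real code works in bits and gets alignments from TargetData.

#include <cstdint>
#include <cstdio>

static uint64_t roundUp(uint64_t Offset, uint64_t Align) {
  return (Offset + Align - 1) / Align * Align;
}

int main() {
  uint64_t NextFieldByteStart = 1; // one i8 already laid out
  uint64_t FieldABIAlign = 4;      // next field is an i32
  uint64_t GCCFieldOffset = 1;     // GCC packed the struct: the i32 goes at byte 1

  uint64_t Natural = roundUp(NextFieldByteStart, FieldABIAlign); // 4
  if (Natural > GCCFieldOffset)
    std::printf("case 3: LLVM would overshoot byte %llu - convert to a packed struct\n",
                (unsigned long long)GCCFieldOffset);
  else if (Natural < GCCFieldOffset)
    std::printf("case 2: insert %llu byte(s) of i8 padding first\n",
                (unsigned long long)(GCCFieldOffset - Natural));
  else
    std::printf("case 1: the field lands at byte %llu as-is\n",
                (unsigned long long)Natural);
  return 0;
}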
-
-/// AddBitFieldToRecordConstant - Bitfields can span multiple LLVM fields and
-/// have other annoying properties, thus requiring extra layout rules. This
-/// routine handles the extra complexity and then forwards to
-/// AddFieldToRecordConstant.
-void ConstantLayoutInfo::
-AddBitFieldToRecordConstant(ConstantInt *ValC, uint64_t GCCFieldOffsetInBits) {
- // If the GCC field starts after our current LLVM field then there must have
- // been an anonymous bitfield or other thing that shoved it over. No matter,
- // just insert some i8 padding until there are bits to fill in.
- while (GCCFieldOffsetInBits > NextFieldByteStart*8) {
- ResultElts.push_back(ConstantInt::get(Type::getInt8Ty(Context), 0));
- ++NextFieldByteStart;
- }
-
- // If the field is a bitfield, it could partially go in a previously
- // laid out structure member, and may add elements to the end of the currently
- // laid out structure.
- //
- // Since bitfields can only partially overlap other bitfields, because we
- // always emit components of bitfields as i8, and because we never emit tail
- // padding until we know it exists, this boils down to merging pieces of the
- // bitfield values into i8's. This is also simplified by the fact that
- // bitfields can only be initialized by ConstantInts. An interesting case is
- // sharing of tail padding in C++ structures. Because this can only happen
- // in inheritance cases, and those are non-POD, we should never see them here.
-
- // First handle any part of Val that overlaps an already laid out field by
- // merging it into it. By the above invariants, we know that it is an i8 that
- // we are merging into. Note that we may be inserting *all* of Val into the
- // previous field.
- if (GCCFieldOffsetInBits < NextFieldByteStart*8) {
- unsigned ValBitSize = ValC->getBitWidth();
- assert(!ResultElts.empty() && "Bitfield starts before first element?");
- assert(ResultElts.back()->getType()->isIntegerTy(8) &&
- isa<ConstantInt>(ResultElts.back()) &&
- "Merging bitfield with non-bitfield value?");
- assert(NextFieldByteStart*8 - GCCFieldOffsetInBits < 8 &&
- "Bitfield overlaps backwards more than one field?");
-
- // Figure out how many bits can fit into the previous field given the
- // starting point in that field.
- unsigned BitsInPreviousField =
- unsigned(NextFieldByteStart*8 - GCCFieldOffsetInBits);
- assert(BitsInPreviousField != 0 && "Previous field should not be null!");
-
- // Split the bits that will be inserted into the previous element out of
- // Val into a new constant. If Val is completely contained in the previous
- // element, this sets Val to null, otherwise we shrink Val to contain the
- // bits to insert in the next element.
- APInt ValForPrevField(ValC->getValue());
- if (BitsInPreviousField >= ValBitSize) {
- // The whole field fits into the previous field.
- ValC = 0;
- } else if (!BYTES_BIG_ENDIAN) {
- // Little endian, take bits from the bottom of the field value.
- ValForPrevField = ValForPrevField.trunc(BitsInPreviousField);
- APInt Tmp = ValC->getValue();
- Tmp = Tmp.lshr(BitsInPreviousField);
- Tmp = Tmp.trunc(ValBitSize-BitsInPreviousField);
- ValC = ConstantInt::get(Context, Tmp);
- } else {
- // Big endian, take bits from the top of the field value.
- ValForPrevField = ValForPrevField.lshr(ValBitSize-BitsInPreviousField);
- ValForPrevField = ValForPrevField.trunc(BitsInPreviousField);
-
- APInt Tmp = ValC->getValue();
- Tmp = Tmp.trunc(ValBitSize-BitsInPreviousField);
- ValC = ConstantInt::get(Context, Tmp);
- }
-
- // Okay, we're going to insert ValForPrevField into the previous i8, extend
- // it and shift into place.
- ValForPrevField = ValForPrevField.zext(8);
- if (!BYTES_BIG_ENDIAN) {
- ValForPrevField = ValForPrevField.shl(8-BitsInPreviousField);
- } else {
- // On big endian, if the entire field fits into the remaining space, shift
- // over to not take part of the next field's bits.
- if (BitsInPreviousField > ValBitSize)
- ValForPrevField = ValForPrevField.shl(BitsInPreviousField-ValBitSize);
- }
-
- // "or" in the previous value and install it.
- const APInt &LastElt = cast<ConstantInt>(ResultElts.back())->getValue();
- ResultElts.back() = ConstantInt::get(Context, ValForPrevField | LastElt);
-
- // If the whole bit-field fit into the previous field, we're done.
- if (ValC == 0) return;
- GCCFieldOffsetInBits = NextFieldByteStart*8;
- }
-
- APInt Val = ValC->getValue();
-
- // Okay, we know that we're plopping bytes onto the end of the struct.
- // Iterate while there is stuff to do.
- while (1) {
- ConstantInt *ValToAppend;
- if (Val.getBitWidth() > 8) {
- if (!BYTES_BIG_ENDIAN) {
- // Little endian lays out low bits first.
- APInt Tmp = Val.trunc(8);
- ValToAppend = ConstantInt::get(Context, Tmp);
-
- Val = Val.lshr(8);
- } else {
- // Big endian lays out high bits first.
- APInt Tmp = Val.lshr(Val.getBitWidth()-8).trunc(8);
- ValToAppend = ConstantInt::get(Context, Tmp);
- }
- } else if (Val.getBitWidth() == 8) {
- ValToAppend = ConstantInt::get(Context, Val);
- } else {
- APInt Tmp = Val.zext(8);
-
- if (BYTES_BIG_ENDIAN)
- Tmp = Tmp << 8-Val.getBitWidth();
- ValToAppend = ConstantInt::get(Context, Tmp);
- }
-
- ResultElts.push_back(ValToAppend);
- ++NextFieldByteStart;
-
- if (Val.getBitWidth() <= 8)
- break;
- Val = Val.trunc(Val.getBitWidth()-8);
- }
-}
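// A worked little-endian example of the merge described above, using plain uint8_t
// arithmetic in place of APInt: a 6-bit field starts at bit 3 of the struct, while
// bits 0-2 of the byte already laid out hold an earlier 3-bit field of value 7.  The
// values and names are invented for the example.

#include <cstdint>
#include <cstdio>

int main() {
  uint8_t LastByte = 0x07;          // previous 3-bit field occupies bits 0-2
  unsigned FieldOffsetInBits = 3;   // GCCFieldOffsetInBits of the new field
  uint8_t FieldVal = 0x2D;          // the 6-bit value 0b101101

  unsigned BitsInPreviousField = 8 - FieldOffsetInBits;                      // 5
  uint8_t ForPrev = (uint8_t)(FieldVal & ((1u << BitsInPreviousField) - 1)); // low 5 bits
  LastByte |= (uint8_t)(ForPrev << FieldOffsetInBits);                       // now 0x6F
  uint8_t NextByte = (uint8_t)(FieldVal >> BitsInPreviousField);             // leftover bit

  std::printf("bytes emitted: 0x%02X 0x%02X\n",
              (unsigned)LastByte, (unsigned)NextByte);                       // 0x6F 0x01
  return 0;
}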
-
-
-/// HandleTailPadding - Check to see if the struct fields, as laid out so far,
-/// will be large enough to make the generated constant struct have the right
-/// size. If not, add explicit tail padding. If rounding up based on the LLVM
-/// IR alignment would make the struct too large, convert it to a packed LLVM
-/// struct.
-void ConstantLayoutInfo::HandleTailPadding(uint64_t GCCStructBitSize) {
- uint64_t GCCStructSize = (GCCStructBitSize+7)/8;
- uint64_t LLVMNaturalSize =
- TargetData::RoundUpAlignment(NextFieldByteStart, MaxLLVMFieldAlignment);
-
- // If the total size of the laid out data is within the size of the GCC type
- // but the rounded-up size (including the tail padding induced by LLVM
- // alignment) is too big, convert to a packed struct type. We don't do this
- // if the size of the laid out fields is too large because initializers like
- //
- // struct X { int A; char C[]; } x = { 4, "foo" };
- //
- // can occur and no amount of packing will help.
- if (NextFieldByteStart <= GCCStructSize && // Not flexible init case.
- LLVMNaturalSize > GCCStructSize) { // Tail pad will overflow type.
- assert(!StructIsPacked && "LLVM Struct type overflow!");
-
- // Switch to packed.
- ConvertToPacked();
- LLVMNaturalSize = NextFieldByteStart;
-
- // Verify that packing solved the problem.
- assert(LLVMNaturalSize <= GCCStructSize &&
- "Oversized should be handled by packing");
- }
-
- // If the LLVM Size is too small, add some tail padding to fill it in.
- if (LLVMNaturalSize < GCCStructSize) {
- const Type *FillTy = Type::getInt8Ty(Context);
- if (GCCStructSize - NextFieldByteStart != 1)
- FillTy = ArrayType::get(FillTy, GCCStructSize - NextFieldByteStart);
- ResultElts.push_back(Constant::getNullValue(FillTy));
- NextFieldByteStart = GCCStructSize;
-
- // At this point, we know that our struct should have the right size.
- // However, if the size of the struct is not a multiple of the largest
- // element alignment, the rounding could bump up the struct more. In this
- // case, we have to convert the struct to being packed.
- LLVMNaturalSize =
- TargetData::RoundUpAlignment(NextFieldByteStart, MaxLLVMFieldAlignment);
-
- // If the alignment will make the struct too big, convert it to being
- // packed.
- if (LLVMNaturalSize > GCCStructSize) {
- assert(!StructIsPacked && "LLVM Struct type overflow!");
- ConvertToPacked();
- }
- }
-}
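
For illustration, a hedged C example (not from the patch) of the situation HandleTailPadding guards against: every field lands on a naturally aligned offset, so field layout alone never packs the struct, yet rounding the 5-byte GCC size up to the 4-byte LLVM field alignment would give 8 bytes.

    /* GCC size is 5 bytes, alignment 1; the i32 field gives the LLVM struct
       4-byte natural alignment, so the rounded-up size would be 8 > 5 and
       HandleTailPadding converts the constant to a packed struct of size 5. */
    #pragma pack(push, 1)
    struct S { int a; char b; };
    #pragma pack(pop)

    struct S s = { 42, 'x' };   /* static initializer handled by the code above */
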
-
-static Constant *ConvertRecordCONSTRUCTOR(tree exp) {
- ConstantLayoutInfo LayoutInfo(getTargetData());
-
- tree NextField = TYPE_FIELDS(TREE_TYPE(exp));
- unsigned HOST_WIDE_INT CtorIndex;
- tree FieldValue;
- tree Field; // The FIELD_DECL for the field.
- FOR_EACH_CONSTRUCTOR_ELT(CONSTRUCTOR_ELTS(exp), CtorIndex, Field, FieldValue){
- // Use the explicitly specified field if there is one; otherwise fall back
- // to the next FIELD_DECL in declaration order.
- if (Field == 0) {
- Field = NextField;
- // Advance to the next FIELD_DECL, skipping over other structure members
- // (e.g. enums).
- while (1) {
- assert(Field && "Fell off end of record!");
- if (TREE_CODE(Field) == FIELD_DECL) break;
- Field = TREE_CHAIN(Field);
- }
- }
-
- // Decode the field's value.
- Constant *Val = ConvertConstant(FieldValue);
-
- // GCCFieldOffsetInBits is where GCC is telling us to put the current field.
- uint64_t GCCFieldOffsetInBits = getFieldOffsetInBits(Field);
- NextField = TREE_CHAIN(Field);
-
- uint64_t FieldSizeInBits = 0;
- if (DECL_SIZE(Field))
- FieldSizeInBits = getInt64(DECL_SIZE(Field), true);
- uint64_t ValueSizeInBits = Val->getType()->getPrimitiveSizeInBits();
- ConstantInt *ValC = dyn_cast<ConstantInt>(Val);
- if (ValC && ValC->isZero() && DECL_SIZE(Field)) {
- // G++ has various bugs handling {} initializers where it doesn't
- // synthesize a zero node of the right type. Rather than trying to fix G++,
- // just hack around it by special-casing zero and allowing it to be the
- // wrong size.
- if (ValueSizeInBits != FieldSizeInBits) {
- APInt ValAsInt = ValC->getValue();
- ValC = ConstantInt::get(Context, ValueSizeInBits < FieldSizeInBits ?
- ValAsInt.zext(FieldSizeInBits) :
- ValAsInt.trunc(FieldSizeInBits));
- ValueSizeInBits = FieldSizeInBits;
- Val = ValC;
- }
- }
-
- // If this is a non-bitfield value, just slap it onto the end of the struct
- // with the appropriate padding etc. If it is a bitfield, we have more
- // processing to do.
- if (!isBitfield(Field))
- LayoutInfo.AddFieldToRecordConstant(Val, GCCFieldOffsetInBits);
- else {
- // Bitfields can only be initialized with constants (integer constant
- // expressions).
- assert(ValC);
- assert(DECL_SIZE(Field));
- assert(ValueSizeInBits >= FieldSizeInBits &&
- "disagreement between LLVM and GCC on bitfield size");
- if (ValueSizeInBits != FieldSizeInBits) {
- // Fields are allowed to be smaller than their type. Simply discard
- // the unwanted upper bits in the field value.
- APInt ValAsInt = ValC->getValue();
- ValC = ConstantInt::get(Context, ValAsInt.trunc(FieldSizeInBits));
- }
- LayoutInfo.AddBitFieldToRecordConstant(ValC, GCCFieldOffsetInBits);
- }
- }
-
- // Check to see if the struct fields, as laid out so far, will be large enough
- // to make the generated constant struct have the right size. If not, add
- // explicit tail padding. If rounding up based on the LLVM IR alignment would
- // make the struct too large, convert it to a packed LLVM struct.
- tree StructTypeSizeTree = TYPE_SIZE(TREE_TYPE(exp));
- if (StructTypeSizeTree && TREE_CODE(StructTypeSizeTree) == INTEGER_CST)
- LayoutInfo.HandleTailPadding(getInt64(StructTypeSizeTree, true));
-
- // Okay, we're done, return the computed elements.
- return ConstantStruct::get(Context, LayoutInfo.ResultElts,
- LayoutInfo.StructIsPacked);
-}
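
As a hedged illustration (not from the patch), a C initializer exercising the paths above: positional entries take the NextField fallback, a designated entry supplies Field explicitly, and the bit-field goes through AddBitFieldToRecordConstant.

    struct R {
      int      a;
      unsigned f : 3;   /* bit-field: AddBitFieldToRecordConstant path */
      char     c;
    };

    /* 'a' and 'f' are positional (Field == 0, so NextField is used); '.c'
       names its FIELD_DECL explicitly. */
    struct R r = { 1, 5, .c = 'q' };
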
-
-static Constant *ConvertUnionCONSTRUCTOR(tree exp) {
- assert(!VEC_empty(constructor_elt, CONSTRUCTOR_ELTS(exp))
- && "Union CONSTRUCTOR has no elements? Zero?");
-
- VEC(constructor_elt, gc) *elt = CONSTRUCTOR_ELTS(exp);
- assert(VEC_length(constructor_elt, elt) == 1
- && "Union CONSTRUCTOR with multiple elements?");
-
- ConstantLayoutInfo LayoutInfo(getTargetData());
-
- // Convert the constant itself.
- Constant *Val = ConvertConstant(VEC_index(constructor_elt, elt, 0)->value);
-
- // Unions are initialized using the first member field. Find it.
- tree Field = TYPE_FIELDS(TREE_TYPE(exp));
- assert(Field && "cannot initialize union with no fields");
- while (TREE_CODE(Field) != FIELD_DECL) {
- Field = TREE_CHAIN(Field);
- assert(Field && "cannot initialize union with no fields");
- }
-
- // If this is a non-bitfield value, just slap it onto the end of the struct
- // with the appropriate padding etc. If it is a bitfield, we have more
- // processing to do.
- if (!isBitfield(Field))
- LayoutInfo.AddFieldToRecordConstant(Val, 0);
- else {
- // Bitfields can only be initialized with constants (integer constant
- // expressions).
- ConstantInt *ValC = cast<ConstantInt>(Val);
- uint64_t FieldSizeInBits = getInt64(DECL_SIZE(Field), true);
- uint64_t ValueSizeInBits = Val->getType()->getPrimitiveSizeInBits();
-
- assert(ValueSizeInBits >= FieldSizeInBits &&
- "disagreement between LLVM and GCC on bitfield size");
- if (ValueSizeInBits != FieldSizeInBits) {
- // Fields are allowed to be smaller than their type. Simply discard
- // the unwanted upper bits in the field value.
- APInt ValAsInt = ValC->getValue();
- ValC = ConstantInt::get(Context, ValAsInt.trunc(FieldSizeInBits));
- }
- LayoutInfo.AddBitFieldToRecordConstant(ValC, 0);
- }
-
- // If the union has a fixed size, and if the value we converted isn't large
- // enough to fill all the bits, add a zero initialized array at the end to pad
- // it out.
- tree UnionTypeSizeTree = TYPE_SIZE(TREE_TYPE(exp));
- if (UnionTypeSizeTree && TREE_CODE(UnionTypeSizeTree) == INTEGER_CST)
- LayoutInfo.HandleTailPadding(getInt64(UnionTypeSizeTree, true));
-
- return ConstantStruct::get(Context, LayoutInfo.ResultElts,
- LayoutInfo.StructIsPacked);
-}
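
A small hedged example (not from the patch) of the union path: the first FIELD_DECL supplies the data and HandleTailPadding pads the constant out to the full union size.

    union U { char c; double d; };

    /* Roughly one byte of data followed by seven bytes of zero tail padding,
       so the emitted constant is as large as the 8-byte union. */
    union U u = { 'x' };
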
-
-static Constant *ConvertCONSTRUCTOR(tree exp) {
- // Note that the constructor can be empty even when the array is non-trivial
- // (has a nonzero number of entries). This is typical for static constructors,
- // where the array is filled in during program initialization.
- if (CONSTRUCTOR_ELTS(exp) == 0 ||
- VEC_length(constructor_elt, CONSTRUCTOR_ELTS(exp)) == 0) // All zeros?
- return Constant::getNullValue(ConvertType(TREE_TYPE(exp)));
-
- switch (TREE_CODE(TREE_TYPE(exp))) {
- default:
- debug_tree(exp);
- assert(0 && "Unknown ctor!");
- case VECTOR_TYPE:
- case ARRAY_TYPE: return ConvertArrayCONSTRUCTOR(exp);
- case RECORD_TYPE: return ConvertRecordCONSTRUCTOR(exp);
- case QUAL_UNION_TYPE:
- case UNION_TYPE: return ConvertUnionCONSTRUCTOR(exp);
- }
-}
-
-Constant *ConvertConstant(tree exp) {
- assert((TREE_CONSTANT(exp) || TREE_CODE(exp) == STRING_CST) &&
- "Isn't a constant!");
- switch (TREE_CODE(exp)) {
- case FDESC_EXPR: // Needed on itanium
- default:
- debug_tree(exp);
- assert(0 && "Unknown constant to convert!");
- abort();
- case INTEGER_CST: return ConvertINTEGER_CST(exp);
- case REAL_CST: return ConvertREAL_CST(exp);
- case VECTOR_CST: return ConvertVECTOR_CST(exp);
- case STRING_CST: return ConvertSTRING_CST(exp);
- case COMPLEX_CST: return ConvertCOMPLEX_CST(exp);
- case NOP_EXPR: return ConvertNOP_EXPR(exp);
- case CONVERT_EXPR: return ConvertCONVERT_EXPR(exp);
- case PLUS_EXPR:
- case MINUS_EXPR: return ConvertBinOp_CST(exp);
- case CONSTRUCTOR: return ConvertCONSTRUCTOR(exp);
- case VIEW_CONVERT_EXPR: return ConvertConstant(TREE_OPERAND(exp, 0));
- case POINTER_PLUS_EXPR: return ConvertPOINTER_PLUS_EXPR(exp);
- case ADDR_EXPR:
- return TheFolder->CreateBitCast(EmitAddressOf(TREE_OPERAND(exp, 0)),
- ConvertType(TREE_TYPE(exp)));
- }
-}
-
-/// get_constant_alignment - Return the alignment of constant EXP in bits.
-static unsigned int
-get_constant_alignment (tree exp)
-{
- unsigned int align = TYPE_ALIGN (TREE_TYPE (exp));
-#ifdef CONSTANT_ALIGNMENT
- align = CONSTANT_ALIGNMENT (exp, align);
-#endif
- return align;
-}
-
-static Constant *EmitAddressOfDecl(tree exp) {
- GlobalValue *Val = cast<GlobalValue>(DEFINITION_LLVM(exp));
-
- // The type of the global value output for exp need not match that of exp.
- // For example if the global's initializer has a different type to the global
- // itself (allowed in GCC but not in LLVM) then the global is changed to have
- // the type of the initializer. Correct for this now.
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- if (Ty->isVoidTy()) Ty = Type::getInt8Ty(Context); // void* -> i8*.
-
- return TheFolder->CreateBitCast(Val, Ty->getPointerTo());
-}
-
-/// EmitAddressOfLABEL_DECL - Someone took the address of a label.
-static Constant *EmitAddressOfLABEL_DECL(tree exp) {
- extern TreeToLLVM *TheTreeToLLVM;
-
- assert(TheTreeToLLVM &&
- "taking the address of a label while not compiling the function!");
-
- // Figure out which function this is for, verify it's the one we're compiling.
- if (DECL_CONTEXT(exp)) {
- assert(TREE_CODE(DECL_CONTEXT(exp)) == FUNCTION_DECL &&
- "Address of label in nested function?");
- assert(TheTreeToLLVM->getFUNCTION_DECL() == DECL_CONTEXT(exp) &&
- "Taking the address of a label that isn't in the current fn!?");
- }
-
- return TheTreeToLLVM->EmitLV_LABEL_DECL(exp);
-}
-
-static Constant *EmitAddressOfCOMPLEX_CST(tree exp) {
- Constant *Init = ConvertCOMPLEX_CST(exp);
-
- // Cache the constants to avoid making obvious duplicates that have to be
- // folded by the optimizer.
- static std::map<Constant*, GlobalVariable*> ComplexCSTCache;
- GlobalVariable *&Slot = ComplexCSTCache[Init];
- if (Slot) return Slot;
-
- // Create a new complex global.
- Slot = new GlobalVariable(*TheModule, Init->getType(), true,
- GlobalVariable::PrivateLinkage, Init, ".cpx");
- Slot->setAlignment(get_constant_alignment(exp) / 8);
-
- return Slot;
-}
-
-static Constant *EmitAddressOfREAL_CST(tree exp) {
- Constant *Init = ConvertREAL_CST(exp);
-
- // Cache the constants to avoid making obvious duplicates that have to be
- // folded by the optimizer.
- static std::map<Constant*, GlobalVariable*> RealCSTCache;
- GlobalVariable *&Slot = RealCSTCache[Init];
- if (Slot) return Slot;
-
- // Create a new real global.
- Slot = new GlobalVariable(*TheModule, Init->getType(), true,
- GlobalVariable::PrivateLinkage, Init, ".rl");
- Slot->setAlignment(get_constant_alignment(exp) / 8);
-
- return Slot;
-}
-
-static Constant *EmitAddressOfSTRING_CST(tree exp) {
- Constant *Init = ConvertSTRING_CST(exp);
-
- // Cache the string constants to avoid making obvious duplicate strings that
- // have to be folded by the optimizer.
- static std::map<Constant*, GlobalVariable*> StringCSTCache;
- GlobalVariable *&Slot = StringCSTCache[Init];
- if (Slot) return Slot;
-
- // Create a new string global.
- Slot = new GlobalVariable(*TheModule, Init->getType(), true,
- GlobalVariable::PrivateLinkage, Init, ".str");
- Slot->setAlignment(get_constant_alignment(exp) / 8);
-
- return Slot;
-}
-
-static Constant *EmitAddressOfARRAY_REF(tree exp) {
- tree Array = TREE_OPERAND(exp, 0);
- tree Index = TREE_OPERAND(exp, 1);
- tree IndexType = TREE_TYPE(Index);
- assert(TREE_CODE(TREE_TYPE(Array)) == ARRAY_TYPE && "Unknown ARRAY_REF!");
-
- // Check for variable sized reference.
- // FIXME: add support for array types where the size doesn't fit into 64 bits
- assert(isSequentialCompatible(TREE_TYPE(Array)) &&
- "Global with variable size?");
-
- // First subtract the lower bound, if any, in the type of the index.
- Constant *IndexVal = ConvertConstant(Index);
- tree LowerBound = array_ref_low_bound(exp);
- if (!integer_zerop(LowerBound))
- IndexVal = TheFolder->CreateSub(IndexVal, ConvertConstant(LowerBound),
- hasNUW(TREE_TYPE(Index)),
- hasNSW(TREE_TYPE(Index)));
-
- const Type *IntPtrTy = getTargetData().getIntPtrType(Context);
- IndexVal = TheFolder->CreateIntCast(IndexVal, IntPtrTy,
- /*isSigned*/!TYPE_UNSIGNED(IndexType));
-
- // Avoid any assumptions about how the array type is represented in LLVM by
- // doing the GEP on a pointer to the first array element.
- Constant *ArrayAddr = EmitAddressOf(Array);
- const Type *EltTy = ConvertType(TREE_TYPE(TREE_TYPE(Array)));
- ArrayAddr = TheFolder->CreateBitCast(ArrayAddr, EltTy->getPointerTo());
-
- return POINTER_TYPE_OVERFLOW_UNDEFINED ?
- TheFolder->CreateInBoundsGetElementPtr(ArrayAddr, &IndexVal, 1) :
- TheFolder->CreateGetElementPtr(ArrayAddr, &IndexVal, 1);
-}
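
For illustration (not from the patch), a constant ARRAY_REF address of the kind handled above; the folded result is roughly an inbounds GEP on a pointer to the element type rather than on the array type.

    static int A[4];
    int *p = &A[2];   /* constant lvalue: EmitAddressOf(A) bitcast to i32*,
                         then an (inbounds) GEP by the index 2 */
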
-
-static Constant *EmitAddressOfCOMPONENT_REF(tree exp) {
- Constant *StructAddrLV = EmitAddressOf(TREE_OPERAND(exp, 0));
-
- tree FieldDecl = TREE_OPERAND(exp, 1);
- const Type *StructTy = ConvertType(DECL_CONTEXT(FieldDecl));
-
- StructAddrLV = TheFolder->CreateBitCast(StructAddrLV,
- StructTy->getPointerTo());
- const Type *FieldTy = ConvertType(TREE_TYPE(FieldDecl));
-
- // BitStart - This is the actual offset of the field from the start of the
- // struct, in bits. For bitfields this may be on a non-byte boundary.
- unsigned BitStart;
- Constant *FieldPtr;
-
- // If the GCC field directly corresponds to an LLVM field, handle it.
- unsigned MemberIndex = GetFieldIndex(FieldDecl, StructTy);
- if (MemberIndex < INT_MAX) {
- // Get a pointer to the byte in which the GCC field starts.
- Constant *Ops[] = {
- Constant::getNullValue(Type::getInt32Ty(Context)),
- ConstantInt::get(Type::getInt32Ty(Context), MemberIndex)
- };
- FieldPtr = TheFolder->CreateInBoundsGetElementPtr(StructAddrLV, Ops, 2);
- // Within that byte, the bit at which the GCC field starts.
- BitStart = TREE_INT_CST_LOW(DECL_FIELD_BIT_OFFSET(TREE_OPERAND(exp, 1)));
- BitStart &= 7;
- } else {
- // Offset will hold the field offset in octets.
- Constant *Offset;
-
- assert(!(BITS_PER_UNIT & 7) && "Unit size not a multiple of 8 bits!");
- if (TREE_OPERAND(exp, 2)) {
- Offset = ConvertConstant(TREE_OPERAND(exp, 2));
- // At this point the offset is measured in units divided by (exactly)
- // (DECL_OFFSET_ALIGN / BITS_PER_UNIT). Convert to octets.
- unsigned factor = DECL_OFFSET_ALIGN(FieldDecl) / 8;
- if (factor != 1)
- Offset = TheFolder->CreateMul(Offset,
- ConstantInt::get(Offset->getType(),
- factor));
- } else {
- assert(DECL_FIELD_OFFSET(FieldDecl) && "Field offset not available!");
- Offset = ConvertConstant(DECL_FIELD_OFFSET(FieldDecl));
- // At this point the offset is measured in units. Convert to octets.
- unsigned factor = BITS_PER_UNIT / 8;
- if (factor != 1)
- Offset = TheFolder->CreateMul(Offset,
- ConstantInt::get(Offset->getType(),
- factor));
- }
-
- // Here BitStart gives the offset of the field in bits from Offset.
- BitStart = getInt64(DECL_FIELD_BIT_OFFSET(FieldDecl), true);
- // Incorporate as much of it as possible into the pointer computation.
- unsigned ByteOffset = BitStart/8;
- if (ByteOffset > 0) {
- Offset = TheFolder->CreateAdd(Offset,
- ConstantInt::get(Offset->getType(),
- ByteOffset));
- BitStart -= ByteOffset*8;
- }
-
- const Type *BytePtrTy = Type::getInt8PtrTy(Context);
- FieldPtr = TheFolder->CreateBitCast(StructAddrLV, BytePtrTy);
- FieldPtr = TheFolder->CreateInBoundsGetElementPtr(FieldPtr, &Offset, 1);
- FieldPtr = TheFolder->CreateBitCast(FieldPtr, FieldTy->getPointerTo());
- }
-
- // Make sure we return a pointer to the right type.
- const Type *EltTy = ConvertType(TREE_TYPE(exp));
- FieldPtr = TheFolder->CreateBitCast(FieldPtr, EltTy->getPointerTo());
-
- assert(BitStart == 0 &&
- "It's a bitfield reference or we didn't get to the field!");
- return FieldPtr;
-}
-
-Constant *EmitAddressOf(tree exp) {
- Constant *LV;
-
- switch (TREE_CODE(exp)) {
- default:
- debug_tree(exp);
- assert(0 && "Unknown constant lvalue to convert!");
- abort();
- case FUNCTION_DECL:
- case CONST_DECL:
- case VAR_DECL:
- LV = EmitAddressOfDecl(exp);
- break;
- case LABEL_DECL:
- LV = EmitAddressOfLABEL_DECL(exp);
- break;
- case COMPLEX_CST:
- LV = EmitAddressOfCOMPLEX_CST(exp);
- break;
- case REAL_CST:
- LV = EmitAddressOfREAL_CST(exp);
- break;
- case STRING_CST:
- LV = EmitAddressOfSTRING_CST(exp);
- break;
- case COMPONENT_REF:
- LV = EmitAddressOfCOMPONENT_REF(exp);
- break;
- case ARRAY_RANGE_REF:
- case ARRAY_REF:
- LV = EmitAddressOfARRAY_REF(exp);
- break;
- case INDIRECT_REF:
- // The lvalue is just the address.
- LV = ConvertConstant(TREE_OPERAND(exp, 0));
- break;
- case COMPOUND_LITERAL_EXPR: // FIXME: not gimple - defined by C front-end
- /* This used to read
- return EmitAddressOf(COMPOUND_LITERAL_EXPR_DECL(exp));
- but gcc warns about that and there doesn't seem to be any way to stop it
- with casts or the like. The following is equivalent with no checking
- (since we know TREE_CODE(exp) is COMPOUND_LITERAL_EXPR the checking
- doesn't accomplish anything anyway). */
- LV = EmitAddressOf(DECL_EXPR_DECL (TREE_OPERAND (exp, 0)));
- break;
- }
-
- // Check that the type of the lvalue is indeed that of a pointer to the tree
- // node. Since LLVM has no void* type, don't insist that void* be converted
- // to a specific LLVM type.
- assert((VOID_TYPE_P(TREE_TYPE(exp)) ||
- LV->getType() == ConvertType(TREE_TYPE(exp))->getPointerTo()) &&
- "LValue of constant has wrong type!");
-
- return LV;
-}
Removed: dragonegg/trunk/llvm-constant.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-constant.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-constant.h (original)
+++ dragonegg/trunk/llvm-constant.h (removed)
@@ -1,39 +0,0 @@
-//=--- llvm-constant.h - Converting and working with constants --*- C++ -*---=//
-//
-// Copyright (C) 2011 Duncan Sands.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file declares functions for converting GCC constants to LLVM and working
-// with them.
-//===----------------------------------------------------------------------===//
-
-#ifndef DRAGONEGG_CONSTANT_H
-#define DRAGONEGG_CONSTANT_H
-
-union tree_node;
-
-namespace llvm {
- class Constant;
-}
-
-// Constant Expressions
-extern llvm::Constant *ConvertConstant(tree_node *exp);
-
-// Constant Expression l-values.
-extern llvm::Constant *EmitAddressOf(tree_node *exp);
-
-#endif /* DRAGONEGG_CONSTANT_H */
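
A hypothetical caller sketch (not from the patch) of the two entry points declared above; the helper EmitGlobalInit and its use of DECL_INITIAL are assumptions for illustration only.

    // ConvertConstant yields an llvm::Constant holding the value of a GCC
    // constant tree; EmitAddressOf yields a constant pointer to storage that
    // holds such a value (e.g. for string or floating point literals).
    llvm::Constant *EmitGlobalInit(tree_node *decl) {
      tree_node *init = DECL_INITIAL(decl);   // the VAR_DECL's initializer
      return init ? ConvertConstant(init) : 0;
    }
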
Removed: dragonegg/trunk/llvm-convert.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-convert.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-convert.cpp (original)
+++ dragonegg/trunk/llvm-convert.cpp (removed)
@@ -1,8027 +0,0 @@
-//===---------- llvm-convert.cpp - Converting gimple to LLVM IR -----------===//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This is the code that converts GCC AST nodes into LLVM code.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-abi.h"
-#include "llvm-constant.h"
-#include "llvm-debug.h"
-#include "llvm-tree.h"
-
-// LLVM headers
-#include "llvm/Module.h"
-#include "llvm/Support/CFG.h"
-#include "llvm/Support/Debug.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/Host.h"
-#include "llvm/Target/TargetLowering.h"
-#include "llvm/ADT/Statistic.h"
-#include "llvm/ADT/StringExtras.h"
-
-// System headers
-#include <gmp.h>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-
-#include "diagnostic.h"
-#include "except.h"
-#include "flags.h"
-#include "langhooks.h"
-#include "output.h"
-#include "rtl.h"
-#include "tm_p.h"
-#include "toplev.h"
-#include "tree-flow.h"
-#include "tree-pass.h"
-
-extern int get_pointer_alignment (tree exp, unsigned int max_align);
-extern enum machine_mode reg_raw_mode[FIRST_PSEUDO_REGISTER];
-}
-
-static LLVMContext &Context = getGlobalContext();
-
-STATISTIC(NumBasicBlocks, "Number of basic blocks converted");
-STATISTIC(NumStatements, "Number of gimple statements converted");
-
-/// dump - Print a gimple statement to standard error.
-void dump(gimple stmt) {
- print_gimple_stmt(stderr, stmt, 0, TDF_RAW);
-}
-
-/// getINTEGER_CSTVal - Return the specified INTEGER_CST value as a uint64_t.
-///
-uint64_t getINTEGER_CSTVal(tree exp) {
- unsigned HOST_WIDE_INT HI = (unsigned HOST_WIDE_INT)TREE_INT_CST_HIGH(exp);
- unsigned HOST_WIDE_INT LO = (unsigned HOST_WIDE_INT)TREE_INT_CST_LOW(exp);
- if (HOST_BITS_PER_WIDE_INT == 64) {
- return (uint64_t)LO;
- } else {
- assert(HOST_BITS_PER_WIDE_INT == 32 &&
- "Only 32- and 64-bit hosts supported!");
- return ((uint64_t)HI << 32) | (uint64_t)LO;
- }
-}
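
For illustration, the 32-bit host case as a tiny standalone sketch (not from the patch): the two HOST_WIDE_INT halves of an INTEGER_CST are reassembled into one 64-bit value.

    #include <cstdint>

    uint64_t joinHalves(uint32_t Hi, uint32_t Lo) {
      return ((uint64_t)Hi << 32) | (uint64_t)Lo;
    }

    // joinHalves(0x1, 0x80000000) == 0x180000000
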
-
-/// isInt64 - Return true if t is an INTEGER_CST that fits in a 64 bit integer.
- /// If Unsigned is false, returns whether it fits in an int64_t. If Unsigned is
-/// true, returns whether the value is non-negative and fits in a uint64_t.
-/// Always returns false for overflowed constants.
-bool isInt64(tree t, bool Unsigned) {
- if (!t)
- return false;
- if (HOST_BITS_PER_WIDE_INT == 64)
- return host_integerp(t, Unsigned) && !TREE_OVERFLOW (t);
- assert(HOST_BITS_PER_WIDE_INT == 32 &&
- "Only 32- and 64-bit hosts supported!");
- return
- (TREE_CODE (t) == INTEGER_CST && !TREE_OVERFLOW (t))
- && ((TYPE_UNSIGNED(TREE_TYPE(t)) == Unsigned) ||
- // If the constant is signed and we want an unsigned result, check
- // that the value is non-negative. If the constant is unsigned and
- // we want a signed result, check it fits in 63 bits.
- (HOST_WIDE_INT)TREE_INT_CST_HIGH(t) >= 0);
-}
-
-/// getInt64 - Extract the value of an INTEGER_CST as a 64 bit integer. If
- /// Unsigned is false, the value must fit in an int64_t. If Unsigned is true,
-/// the value must be non-negative and fit in a uint64_t. Must not be used on
-/// overflowed constants. These conditions can be checked by calling isInt64.
-uint64_t getInt64(tree t, bool Unsigned) {
- assert(isInt64(t, Unsigned) && "invalid constant!");
- (void)Unsigned; // Otherwise unused if asserts off - avoid compiler warning.
- return getINTEGER_CSTVal(t);
-}
-
-/// getPointerAlignment - Return the alignment in bytes of exp, a pointer valued
-/// expression, or 1 if the alignment is not known.
-static unsigned int getPointerAlignment(tree exp) {
- assert(POINTER_TYPE_P (TREE_TYPE (exp)) && "Expected a pointer type!");
- unsigned int align = get_pointer_alignment(exp, BIGGEST_ALIGNMENT) / 8;
- return align ? align : 1;
-}
-
-/// getSSAPlaceholder - A fake value associated with an SSA name when the name
-/// is used before being defined (this can occur because basic blocks are not
-/// output in dominator order). Replaced with the correct value when the SSA
-/// name's definition is encountered.
-static Value *GetSSAPlaceholder(const Type *Ty) {
- // Cannot use a constant, since there is no way to distinguish a fake value
- // from a real value. So use an instruction with no parent. This needs to
- // be an instruction that can return a struct type, since the SSA name might
- // be a complex number. It could be a PHINode, except that the GCC phi node
- // conversion logic also constructs phi nodes with no parent. A SelectInst
- // would work, but a LoadInst seemed neater.
- return new LoadInst(UndefValue::get(Ty->getPointerTo()), NULL);
-}
-
-/// isSSAPlaceholder - Whether this is a fake value being used as a placeholder
-/// for the definition of an SSA name.
-static bool isSSAPlaceholder(Value *V) {
- LoadInst *LI = dyn_cast<LoadInst>(V);
- return LI && !LI->getParent();
-}
-
-/// NameValue - Try to name the given value after the given GCC tree node. If
-/// the GCC tree node has no sensible name then it does nothing. If the value
-/// already has a name then it is not changed.
-static void NameValue(Value *V, tree t) {
- if (!V->hasName()) {
- const std::string &Name = getDescriptiveName(t);
- if (!Name.empty())
- V->setName(Name);
- }
-}
-
-/// SelectFPName - Helper for choosing a name depending on whether a floating
-/// point type is float, double or long double.
-static StringRef SelectFPName(tree type, StringRef FloatName,
- StringRef DoubleName, StringRef LongDoubleName) {
- assert(SCALAR_FLOAT_TYPE_P(type) && "Expected a floating point type!");
- if (TYPE_MODE(type) == TYPE_MODE(float_type_node))
- return FloatName;
- if (TYPE_MODE(type) == TYPE_MODE(double_type_node))
- return DoubleName;
- assert(TYPE_MODE(type) == TYPE_MODE(long_double_type_node) &&
- "Unknown floating point type!");
- return LongDoubleName;
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... High-Level Methods ...
-//===----------------------------------------------------------------------===//
-
-/// TheTreeToLLVM - Keep track of the current function being compiled.
-TreeToLLVM *TheTreeToLLVM = 0;
-
-const TargetData &getTargetData() {
- return *TheTarget->getTargetData();
-}
-
-/// EmitDebugInfo - Return true if debug info is to be emitted for current
-/// function.
-bool TreeToLLVM::EmitDebugInfo() {
- if (TheDebugInfo && !DECL_IGNORED_P(getFUNCTION_DECL()))
- return true;
- return false;
-}
-
-TreeToLLVM::TreeToLLVM(tree fndecl) :
- TD(getTargetData()), Builder(Context, *TheFolder) {
- FnDecl = fndecl;
- AllocaInsertionPoint = 0;
- Fn = 0;
- ReturnBB = 0;
- ReturnOffset = 0;
- RewindBB = 0;
- RewindTmp = 0;
-
- if (EmitDebugInfo()) {
- expanded_location Location = expand_location(DECL_SOURCE_LOCATION (fndecl));
-
- if (Location.file) {
- TheDebugInfo->setLocationFile(Location.file);
- TheDebugInfo->setLocationLine(Location.line);
- } else {
- TheDebugInfo->setLocationFile("");
- TheDebugInfo->setLocationLine(0);
- }
- }
-
- assert(TheTreeToLLVM == 0 && "Reentering function creation?");
- TheTreeToLLVM = this;
-}
-
-TreeToLLVM::~TreeToLLVM() {
- TheTreeToLLVM = 0;
-}
-
-//===----------------------------------------------------------------------===//
-// ... Local declarations ...
-//===----------------------------------------------------------------------===//
-
-/// isLocalDecl - Whether this declaration is local to the current function.
-static bool isLocalDecl(tree decl) {
- assert(HAS_RTL_P(decl) && "Expected a declaration with RTL!");
- return DECL_CONTEXT(decl) == current_function_decl &&
- !TREE_STATIC(decl) && // Static variables not considered local.
- TREE_CODE(decl) != FUNCTION_DECL; // Nested functions not considered local.
-}
-
-/// set_decl_local - Remember the LLVM value for a GCC declaration.
-Value *TreeToLLVM::set_decl_local(tree decl, Value *V) {
- if (!isLocalDecl(decl))
- return set_decl_llvm(decl, V);
- if (V != NULL)
- return LocalDecls[decl] = V;
- LocalDecls.erase(decl);
- return NULL;
-}
-
-/// get_decl_local - Retrieve the LLVM value for a GCC declaration, or NULL.
-Value *TreeToLLVM::get_decl_local(tree decl) {
- if (!isLocalDecl(decl))
- return get_decl_llvm(decl);
- DenseMap<tree, AssertingVH<Value> >::iterator I = LocalDecls.find(decl);
- if (I != LocalDecls.end())
- return I->second;
- return NULL;
-}
-
-/// make_decl_local - Return the LLVM value for a GCC declaration if it exists.
-/// Otherwise creates and returns an appropriate value.
-Value *TreeToLLVM::make_decl_local(tree decl) {
- if (!isLocalDecl(decl))
- return make_decl_llvm(decl);
-
- DenseMap<tree, AssertingVH<Value> >::iterator I = LocalDecls.find(decl);
- if (I != LocalDecls.end())
- return I->second;
-
- switch (TREE_CODE(decl)) {
- default:
- llvm_unreachable("Unhandled local declaration!");
-
- case RESULT_DECL:
- case VAR_DECL:
- EmitAutomaticVariableDecl(decl);
- I = LocalDecls.find(decl);
- assert(I != LocalDecls.end() && "Not a local variable?");
- return I->second;
- }
-}
-
-/// make_definition_local - Ensure that the body or initial value of the given
-/// GCC declaration will be output, and return a declaration for it.
-Value *TreeToLLVM::make_definition_local(tree decl) {
- if (!isLocalDecl(decl))
- return make_definition_llvm(decl);
- return make_decl_local(decl);
-}
-
-/// llvm_store_scalar_argument - Store scalar argument ARGVAL of type
-/// LLVMTY at location LOC.
-static void llvm_store_scalar_argument(Value *Loc, Value *ArgVal,
- const llvm::Type *LLVMTy,
- unsigned RealSize,
- LLVMBuilder &Builder) {
- if (RealSize) {
- // Not clear what this is supposed to do on big endian machines...
- assert(!BYTES_BIG_ENDIAN && "Unsupported case - please report");
- // Do byte wise store because actual argument type does not match LLVMTy.
- assert(ArgVal->getType()->isIntegerTy() && "Expected an integer value!");
- const Type *StoreType = IntegerType::get(Context, RealSize * 8);
- Loc = Builder.CreateBitCast(Loc, StoreType->getPointerTo());
- if (ArgVal->getType()->getPrimitiveSizeInBits() >=
- StoreType->getPrimitiveSizeInBits())
- ArgVal = Builder.CreateTrunc(ArgVal, StoreType);
- else
- ArgVal = Builder.CreateZExt(ArgVal, StoreType);
- Builder.CreateStore(ArgVal, Loc);
- } else {
- // This cast only involves pointers, therefore BitCast.
- Loc = Builder.CreateBitCast(Loc, LLVMTy->getPointerTo());
- Builder.CreateStore(ArgVal, Loc);
- }
-}
-
-#ifndef LLVM_STORE_SCALAR_ARGUMENT
-#define LLVM_STORE_SCALAR_ARGUMENT(LOC,ARG,TYPE,SIZE,BUILDER) \
- llvm_store_scalar_argument((LOC),(ARG),(TYPE),(SIZE),(BUILDER))
-#endif
-
-// This is true for types whose alignment when passed on the stack is less
-// than the alignment of the type.
-#define LLVM_BYVAL_ALIGNMENT_TOO_SMALL(T) \
- (LLVM_BYVAL_ALIGNMENT(T) && LLVM_BYVAL_ALIGNMENT(T) < TYPE_ALIGN_UNIT(T))
-
-namespace {
- /// FunctionPrologArgumentConversion - This helper class is driven by the ABI
- /// definition for this target to figure out how to retrieve arguments from
- /// the stack/regs coming into a function and store them into an appropriate
- /// alloca for the argument.
- struct FunctionPrologArgumentConversion : public DefaultABIClient {
- tree FunctionDecl;
- Function::arg_iterator &AI;
- LLVMBuilder Builder;
- std::vector<Value*> LocStack;
- std::vector<std::string> NameStack;
- unsigned Offset;
- CallingConv::ID &CallingConv;
- bool isShadowRet;
- FunctionPrologArgumentConversion(tree FnDecl,
- Function::arg_iterator &ai,
- const LLVMBuilder &B, CallingConv::ID &CC)
- : FunctionDecl(FnDecl), AI(ai), Builder(B), Offset(0), CallingConv(CC),
- isShadowRet(false) {}
-
- /// getCallingConv - This provides the desired CallingConv for the function.
- CallingConv::ID& getCallingConv(void) { return CallingConv; }
-
- void HandlePad(const llvm::Type * /*LLVMTy*/) {
- ++AI;
- }
-
- bool isShadowReturn() const {
- return isShadowRet;
- }
- void setName(const std::string &Name) {
- NameStack.push_back(Name);
- }
- void setLocation(Value *Loc) {
- LocStack.push_back(Loc);
- }
- void clear() {
- assert(NameStack.size() == 1 && LocStack.size() == 1 && "Imbalance!");
- NameStack.clear();
- LocStack.clear();
- }
-
- void HandleAggregateShadowResult(const PointerType * /*PtrArgTy*/,
- bool /*RetPtr*/) {
- // If the function returns a structure by value, we transform the function
- // to take a pointer to the result as the first argument of the function
- // instead.
- assert(AI != Builder.GetInsertBlock()->getParent()->arg_end() &&
- "No explicit return value?");
- AI->setName("agg.result");
-
- isShadowRet = true;
- tree ResultDecl = DECL_RESULT(FunctionDecl);
- tree RetTy = TREE_TYPE(TREE_TYPE(FunctionDecl));
- if (TREE_CODE(RetTy) == TREE_CODE(TREE_TYPE(ResultDecl))) {
- TheTreeToLLVM->set_decl_local(ResultDecl, AI);
- ++AI;
- return;
- }
-
- // Otherwise, this must be something returned with NRVO.
- assert(TREE_CODE(TREE_TYPE(ResultDecl)) == REFERENCE_TYPE &&
- "Not type match and not passing by reference?");
- // Create an alloca for the ResultDecl.
- Value *Tmp = TheTreeToLLVM->CreateTemporary(AI->getType());
- Builder.CreateStore(AI, Tmp);
-
- TheTreeToLLVM->set_decl_local(ResultDecl, Tmp);
- if (TheDebugInfo && !DECL_IGNORED_P(FunctionDecl)) {
- TheDebugInfo->EmitDeclare(ResultDecl,
- dwarf::DW_TAG_return_variable,
- "agg.result", RetTy, Tmp,
- Builder);
- }
- ++AI;
- }
-
- void HandleScalarShadowResult(const PointerType * /*PtrArgTy*/,
- bool /*RetPtr*/) {
- assert(AI != Builder.GetInsertBlock()->getParent()->arg_end() &&
- "No explicit return value?");
- AI->setName("scalar.result");
- isShadowRet = true;
- TheTreeToLLVM->set_decl_local(DECL_RESULT(FunctionDecl), AI);
- ++AI;
- }
-
- void HandleScalarArgument(const llvm::Type *LLVMTy, tree /*type*/,
- unsigned RealSize = 0) {
- Value *ArgVal = AI;
- if (ArgVal->getType() != LLVMTy) {
- if (ArgVal->getType()->isPointerTy() && LLVMTy->isPointerTy()) {
- // If this is GCC being sloppy about pointer types, insert a bitcast.
- // See PR1083 for an example.
- ArgVal = Builder.CreateBitCast(ArgVal, LLVMTy);
- } else if (ArgVal->getType()->isDoubleTy()) {
- // If this is a K&R float parameter, it got promoted to double. Insert
- // the truncation to float now.
- ArgVal = Builder.CreateFPTrunc(ArgVal, LLVMTy,
- NameStack.back().c_str());
- } else {
- // If this is just a mismatch between integer types, this is due
- // to K&R prototypes, where the forward proto defines the arg as int
- // and the actual implementation uses a short or char.
- assert(ArgVal->getType()->isIntegerTy(32) && LLVMTy->isIntegerTy() &&
- "Lowerings don't match?");
- ArgVal = Builder.CreateTrunc(ArgVal, LLVMTy,NameStack.back().c_str());
- }
- }
- assert(!LocStack.empty());
- Value *Loc = LocStack.back();
- LLVM_STORE_SCALAR_ARGUMENT(Loc,ArgVal,LLVMTy,RealSize,Builder);
- AI->setName(NameStack.back());
- ++AI;
- }
-
- void HandleByValArgument(const llvm::Type * /*LLVMTy*/, tree type) {
- if (LLVM_BYVAL_ALIGNMENT_TOO_SMALL(type)) {
- // Incoming object on stack is insufficiently aligned for the type.
- // Make a correctly aligned copy.
- assert(!LocStack.empty());
- Value *Loc = LocStack.back();
- // We cannot use field-by-field copy here; x86 long double is 16
- // bytes, but only 10 are copied. If the object is really a union
- // we might need the other bytes. We must also be careful to use
- // the smaller alignment.
- const Type *SBP = Type::getInt8PtrTy(Context);
- const Type *IntPtr = getTargetData().getIntPtrType(Context);
- Value *Ops[5] = {
- Builder.CreateCast(Instruction::BitCast, Loc, SBP),
- Builder.CreateCast(Instruction::BitCast, AI, SBP),
- ConstantInt::get(IntPtr,
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(type))),
- ConstantInt::get(Type::getInt32Ty(Context),
- LLVM_BYVAL_ALIGNMENT(type)),
- ConstantInt::get(Type::getInt1Ty(Context), false)
- };
- const Type *ArgTypes[3] = {SBP, SBP, IntPtr };
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::memcpy,
- ArgTypes, 3), Ops, Ops+5);
-
- AI->setName(NameStack.back());
- }
- ++AI;
- }
-
- void HandleFCAArgument(const llvm::Type * /*LLVMTy*/, tree /*type*/) {
- // Store the FCA argument into alloca.
- assert(!LocStack.empty());
- Value *Loc = LocStack.back();
- Builder.CreateStore(AI, Loc);
- AI->setName(NameStack.back());
- ++AI;
- }
-
- void HandleAggregateResultAsScalar(const Type * /*ScalarTy*/,
- unsigned Offset = 0) {
- this->Offset = Offset;
- }
-
- void EnterField(unsigned FieldNo, const llvm::Type *StructTy) {
- NameStack.push_back(NameStack.back()+"."+utostr(FieldNo));
-
- Value *Loc = LocStack.back();
- // This cast only involves pointers, therefore BitCast.
- Loc = Builder.CreateBitCast(Loc, StructTy->getPointerTo());
-
- Loc = Builder.CreateStructGEP(Loc, FieldNo);
- LocStack.push_back(Loc);
- }
- void ExitField() {
- NameStack.pop_back();
- LocStack.pop_back();
- }
- };
-}
-
-// isPassedByVal - Return true if an aggregate of the specified type will be
-// passed in memory byval.
-static bool isPassedByVal(tree type, const Type *Ty,
- std::vector<const Type*> &ScalarArgs,
- bool isShadowRet, CallingConv::ID &/*CC*/) {
- if (LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR(type, Ty))
- return true;
-
- std::vector<const Type*> Args;
- if (LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS(type, Ty, CC, Args) &&
- LLVM_AGGREGATE_PARTIALLY_PASSED_IN_REGS(Args, ScalarArgs, isShadowRet,
- CC))
- // We want to pass the whole aggregate in registers but only some of the
- // registers are available.
- return true;
- return false;
-}
-
-void TreeToLLVM::StartFunctionBody() {
- std::string Name = getLLVMAssemblerName(FnDecl).str();
- // TODO: Add support for dropping the leading '\1' in order to support
- // unsigned bswap(unsigned) __asm__("llvm.bswap");
- // This would also require adjustments in make_decl_llvm.
-
- // Determine the FunctionType and calling convention for this function.
- tree static_chain = cfun->static_chain_decl;
- const FunctionType *FTy;
- CallingConv::ID CallingConv;
- AttrListPtr PAL;
-
- bool getFunctionTypeFromArgList = false;
-
- // If the function has no arguments and is varargs (...), turn it into a
- // non-varargs function by scanning the param list for the function. This
- // allows C functions declared as "T foo() {}" to be treated like
- // "T foo(void) {}" and allows us to handle functions with K&R-style
- // definitions correctly.
- //
- // Note that we only do this in C/Objective-C. Doing this in C++ for
- // functions explicitly declared as taking (...) is bad.
- if (TYPE_ARG_TYPES(TREE_TYPE(FnDecl)) == 0 && flag_vararg_requires_arguments)
- getFunctionTypeFromArgList = true;
-
- // When forcing vararg prototypes ensure that the function only gets a varargs
- // part if it was originally declared varargs.
- if (flag_force_vararg_prototypes) {
- tree Args = TYPE_ARG_TYPES(TREE_TYPE(FnDecl));
- while (Args && TREE_VALUE(Args) != void_type_node)
- Args = TREE_CHAIN(Args);
- if (Args != 0)
- getFunctionTypeFromArgList = true;
- }
-
- if (getFunctionTypeFromArgList)
- FTy = TheTypeConverter->ConvertArgListToFnType(TREE_TYPE(FnDecl),
- DECL_ARGUMENTS(FnDecl),
- static_chain,
- CallingConv, PAL);
- else
- // Otherwise, just get the type from the function itself.
- FTy = TheTypeConverter->ConvertFunctionType(TREE_TYPE(FnDecl),
- FnDecl,
- static_chain,
- CallingConv, PAL);
-
- // If we've already seen this function and created a prototype, and if the
- // proto has the right LLVM type, just use it.
- if (DECL_LOCAL_SET_P(FnDecl) &&
- cast<PointerType>(DECL_LOCAL(FnDecl)->getType())->getElementType()==FTy) {
- Fn = cast<Function>(DECL_LOCAL(FnDecl));
- assert(Fn->getCallingConv() == CallingConv &&
- "Calling convention disagreement between prototype and impl!");
- // The visibility may have changed since the last time we saw this
- // function. Update it to the current setting.
- handleVisibility(FnDecl, Fn);
- } else {
- Function *FnEntry = TheModule->getFunction(Name);
- if (FnEntry) {
- assert(FnEntry->getName() == Name && "Same entry, different name?");
- assert((FnEntry->isDeclaration() ||
- FnEntry->getLinkage() == Function::AvailableExternallyLinkage) &&
- "Multiple fns with same name and neither are external!");
- FnEntry->setName(""); // Clear name to avoid conflicts.
- assert(FnEntry->getCallingConv() == CallingConv &&
- "Calling convention disagreement between prototype and impl!");
- }
-
- // Otherwise, either it exists with the wrong type or it doesn't exist. In
- // either case create a new function.
- Fn = Function::Create(FTy, Function::ExternalLinkage, Name, TheModule);
- assert(Fn->getName() == Name && "Preexisting fn with the same name!");
- Fn->setCallingConv(CallingConv);
- Fn->setAttributes(PAL);
-
- // If a previous proto existed with the wrong type, replace any uses of it
- // with the actual function and delete the proto.
- if (FnEntry) {
- FnEntry->replaceAllUsesWith
- (TheFolder->CreateBitCast(Fn, FnEntry->getType()));
- changeLLVMConstant(FnEntry, Fn);
- FnEntry->eraseFromParent();
- }
- SET_DECL_LOCAL(FnDecl, Fn);
- }
-
- // The function should not already have a body.
- assert(Fn->empty() && "Function expanded multiple times!");
-
- // Compute the linkage that the function should get.
- if (false) {//FIXME DECL_LLVM_PRIVATE(FnDecl)) {
- Fn->setLinkage(Function::PrivateLinkage);
- } else if (false) {//FIXME DECL_LLVM_LINKER_PRIVATE(FnDecl)) {
- Fn->setLinkage(Function::LinkerPrivateLinkage);
- } else if (!TREE_PUBLIC(FnDecl) /*|| lang_hooks.llvm_is_in_anon(subr)*/) {
- Fn->setLinkage(Function::InternalLinkage);
- } else if (DECL_COMDAT(FnDecl)) {
- Fn->setLinkage(Function::getLinkOnceLinkage(flag_odr));
- } else if (DECL_WEAK(FnDecl)) {
- // The user may have explicitly asked for weak linkage - ignore flag_odr.
- Fn->setLinkage(Function::WeakAnyLinkage);
- } else if (DECL_ONE_ONLY(FnDecl)) {
- Fn->setLinkage(Function::getWeakLinkage(flag_odr));
- } else if (DECL_EXTERNAL(FnDecl)) {
- Fn->setLinkage(Function::AvailableExternallyLinkage);
- }
-
-#ifdef TARGET_ADJUST_LLVM_LINKAGE
- TARGET_ADJUST_LLVM_LINKAGE(Fn,FnDecl);
-#endif /* TARGET_ADJUST_LLVM_LINKAGE */
-
- // Handle visibility style
- handleVisibility(FnDecl, Fn);
-
- // Register constructors and destructors.
- if (DECL_STATIC_CONSTRUCTOR(FnDecl))
- register_ctor_dtor(Fn, DECL_INIT_PRIORITY(FnDecl), true);
- if (DECL_STATIC_DESTRUCTOR(FnDecl))
- register_ctor_dtor(Fn, DECL_FINI_PRIORITY(FnDecl), false);
-
- // Handle attribute "aligned".
- if (DECL_ALIGN (FnDecl) != FUNCTION_BOUNDARY)
- Fn->setAlignment(DECL_ALIGN (FnDecl) / 8);
-
- // Handle functions in specified sections.
- if (DECL_SECTION_NAME(FnDecl))
- Fn->setSection(TREE_STRING_POINTER(DECL_SECTION_NAME(FnDecl)));
-
- // Handle used Functions
- if (lookup_attribute ("used", DECL_ATTRIBUTES (FnDecl)))
- AttributeUsedGlobals.insert(Fn);
-
- // Handle noinline Functions
- if (lookup_attribute ("noinline", DECL_ATTRIBUTES (FnDecl)))
- Fn->addFnAttr(Attribute::NoInline);
-
- // Handle always_inline attribute
- if (lookup_attribute ("always_inline", DECL_ATTRIBUTES (FnDecl)))
- Fn->addFnAttr(Attribute::AlwaysInline);
-
- // Pass inline keyword to optimizer.
- if (DECL_DECLARED_INLINE_P(FnDecl))
- Fn->addFnAttr(Attribute::InlineHint);
-
- if (optimize_size)
- Fn->addFnAttr(Attribute::OptimizeForSize);
-
- // Handle stack smashing protection.
- if (flag_stack_protect == 1)
- Fn->addFnAttr(Attribute::StackProtect);
- else if (flag_stack_protect == 2)
- Fn->addFnAttr(Attribute::StackProtectReq);
-
- // Handle naked attribute
- if (lookup_attribute ("naked", DECL_ATTRIBUTES (FnDecl)))
- Fn->addFnAttr(Attribute::Naked);
-
- // Handle annotate attributes
- if (DECL_ATTRIBUTES(FnDecl))
- AddAnnotateAttrsToGlobal(Fn, FnDecl);
-
- // Mark the function "nounwind" if not doing exception handling.
- if (!flag_exceptions)
- Fn->setDoesNotThrow();
-
- // Create a new basic block for the function.
- BasicBlock *EntryBlock = BasicBlock::Create(Context, "entry", Fn);
- BasicBlocks[ENTRY_BLOCK_PTR] = EntryBlock;
- Builder.SetInsertPoint(EntryBlock);
-
- if (EmitDebugInfo())
- TheDebugInfo->EmitFunctionStart(FnDecl, Fn);
-
- // Loop over all of the arguments to the function, setting Argument names and
- // creating argument allocas for the PARM_DECLs in case their address is
- // exposed.
- Function::arg_iterator AI = Fn->arg_begin();
-
- // Rename and alloca'ify real arguments.
- FunctionPrologArgumentConversion Client(FnDecl, AI, Builder, CallingConv);
- DefaultABI ABIConverter(Client);
-
- // Handle the DECL_RESULT.
- ABIConverter.HandleReturnType(TREE_TYPE(TREE_TYPE(FnDecl)), FnDecl,
- DECL_BUILT_IN(FnDecl));
- // Remember this for use by FinishFunctionBody.
- TheTreeToLLVM->ReturnOffset = Client.Offset;
-
- // Prepend the static chain (if any) to the list of arguments.
- tree Args = static_chain ? static_chain : DECL_ARGUMENTS(FnDecl);
-
- // Scalar arguments processed so far.
- std::vector<const Type*> ScalarArgs;
- while (Args) {
- const char *Name = "unnamed_arg";
- if (DECL_NAME(Args)) Name = IDENTIFIER_POINTER(DECL_NAME(Args));
-
- const Type *ArgTy = ConvertType(TREE_TYPE(Args));
- bool isInvRef = isPassedByInvisibleReference(TREE_TYPE(Args));
- if (isInvRef ||
- (ArgTy->isVectorTy() &&
- LLVM_SHOULD_PASS_VECTOR_USING_BYVAL_ATTR(TREE_TYPE(Args)) &&
- !LLVM_BYVAL_ALIGNMENT_TOO_SMALL(TREE_TYPE(Args))) ||
- (!ArgTy->isSingleValueType() &&
- isPassedByVal(TREE_TYPE(Args), ArgTy, ScalarArgs,
- Client.isShadowReturn(), CallingConv) &&
- !LLVM_BYVAL_ALIGNMENT_TOO_SMALL(TREE_TYPE(Args)))) {
- // If the value is passed by 'invisible reference' or 'byval reference',
- // the l-value for the argument IS the argument itself. But for byval
- // arguments whose alignment as an argument is less than the normal
- // alignment of the type (examples are x86-32 aggregates containing long
- // double and large x86-64 vectors), we need to make the copy.
- AI->setName(Name);
- SET_DECL_LOCAL(Args, AI);
- if (!isInvRef && EmitDebugInfo())
- TheDebugInfo->EmitDeclare(Args, dwarf::DW_TAG_arg_variable,
- Name, TREE_TYPE(Args),
- AI, Builder);
- ABIConverter.HandleArgument(TREE_TYPE(Args), ScalarArgs);
- } else {
- // Otherwise, we create an alloca to hold the argument value and provide
- // an l-value. On entry to the function, we copy formal argument values
- // into the alloca.
- Value *Tmp = CreateTemporary(ArgTy, TYPE_ALIGN_UNIT(TREE_TYPE(Args)));
- Tmp->setName(std::string(Name)+"_addr");
- SET_DECL_LOCAL(Args, Tmp);
- if (EmitDebugInfo()) {
- TheDebugInfo->EmitDeclare(Args, dwarf::DW_TAG_arg_variable,
- Name, TREE_TYPE(Args), Tmp,
- Builder);
- }
-
- // Emit annotate intrinsic if arg has annotate attr
- if (DECL_ATTRIBUTES(Args))
- EmitAnnotateIntrinsic(Tmp, Args);
-
- // Emit gcroot intrinsic if arg has attribute
- if (POINTER_TYPE_P(TREE_TYPE(Args))
- && lookup_attribute ("gcroot", TYPE_ATTRIBUTES(TREE_TYPE(Args))))
- EmitTypeGcroot(Tmp);
-
- Client.setName(Name);
- Client.setLocation(Tmp);
- ABIConverter.HandleArgument(TREE_TYPE(Args), ScalarArgs);
- Client.clear();
- }
-
- Args = Args == static_chain ? DECL_ARGUMENTS(FnDecl) : TREE_CHAIN(Args);
- }
-
- // Loading the value of a PARM_DECL at this point yields its initial value.
- // Remember this for use when materializing the reads implied by SSA default
- // definitions.
- SSAInsertionPoint = Builder.Insert(CastInst::Create(Instruction::BitCast,
- Constant::getNullValue(Type::getInt32Ty(Context)),
- Type::getInt32Ty(Context)), "ssa point");
-
- // If this function has nested functions, we should handle a potential
- // nonlocal_goto_save_area.
- if (cfun->nonlocal_goto_save_area) {
- // Not supported yet.
- }
-
- if (EmitDebugInfo())
- TheDebugInfo->EmitStopPoint(Builder.GetInsertBlock(), Builder);
-
- // Create a new block for the return node, but don't insert it yet.
- ReturnBB = BasicBlock::Create(Context, "return");
-}
-
-/// DefineSSAName - Use the given value as the definition of the given SSA name.
-/// Returns the provided value as a convenience.
-Value *TreeToLLVM::DefineSSAName(tree reg, Value *Val) {
- assert(TREE_CODE(reg) == SSA_NAME && "Not an SSA name!");
- if (Value *ExistingValue = SSANames[reg]) {
- if (Val != ExistingValue) {
- assert(isSSAPlaceholder(ExistingValue) && "Multiply defined SSA name!");
- // Replace the placeholder with the value everywhere. This also updates
- // the map entry, because it is a TrackingVH.
- ExistingValue->replaceAllUsesWith(Val);
- delete ExistingValue;
- }
- return Val;
- }
- return SSANames[reg] = Val;
-}
-
-typedef SmallVector<std::pair<BasicBlock*, unsigned>, 8> PredVector;
-typedef SmallVector<std::pair<BasicBlock*, tree>, 8> TreeVector;
-typedef SmallVector<std::pair<BasicBlock*, Value*>, 8> ValueVector;
-
-/// PopulatePhiNodes - Populate generated phi nodes with their operands.
-void TreeToLLVM::PopulatePhiNodes() {
- PredVector Predecessors;
- TreeVector IncomingValues;
- ValueVector PhiArguments;
-
- for (unsigned i = 0, e = PendingPhis.size(); i < e; ++i) {
- // The phi node to process.
- PhiRecord &P = PendingPhis[i];
-
- // Extract the incoming value for each predecessor from the GCC phi node.
- for (size_t i = 0, e = gimple_phi_num_args(P.gcc_phi); i != e; ++i) {
- // The incoming GCC basic block.
- basic_block bb = gimple_phi_arg_edge(P.gcc_phi, i)->src;
-
- // The corresponding LLVM basic block.
- DenseMap<basic_block, BasicBlock*>::iterator BI = BasicBlocks.find(bb);
- assert(BI != BasicBlocks.end() && "GCC basic block not output?");
-
- // The incoming GCC expression.
- tree val = gimple_phi_arg(P.gcc_phi, i)->def;
-
- // Associate it with the LLVM basic block.
- IncomingValues.push_back(std::make_pair(BI->second, val));
-
- // Several LLVM basic blocks may be generated when emitting one GCC basic
- // block. The additional blocks always occur immediately after the main
- // basic block, and can be identified by the fact that they are nameless.
- // Associate the incoming expression with all of them, since any of them
- // may occur as a predecessor of the LLVM basic block containing the phi.
- Function::iterator FI(BI->second), FE = Fn->end();
- for (++FI; FI != FE && !FI->hasName(); ++FI) {
- assert(FI->getSinglePredecessor() == IncomingValues.back().first &&
- "Anonymous block does not continue predecessor!");
- IncomingValues.push_back(std::make_pair(FI, val));
- }
- }
-
- // Sort the incoming values by basic block to help speed up queries.
- std::sort(IncomingValues.begin(), IncomingValues.end());
-
- // Get the LLVM predecessors for the basic block containing the phi node,
- // and remember their positions in the list of predecessors (this is used
- // to avoid adding phi operands in a non-deterministic order).
- Predecessors.reserve(gimple_phi_num_args(P.gcc_phi)); // At least this many.
- BasicBlock *PhiBB = P.PHI->getParent();
- unsigned Index = 0;
- for (pred_iterator PI = pred_begin(PhiBB), PE = pred_end(PhiBB); PI != PE;
- ++PI, ++Index)
- Predecessors.push_back(std::make_pair(*PI, Index));
-
- if (Predecessors.empty()) {
- // FIXME: If this happens then GCC has a control flow edge where LLVM has
- // none - something has gone wrong. For the moment be laid back about it:
- // since we don't yet wire up exception handling code, this happens all the
- // time in Ada and C++.
- P.PHI->replaceAllUsesWith(UndefValue::get(P.PHI->getType()));
- P.PHI->eraseFromParent();
- IncomingValues.clear();
- continue;
- }
-
- // Sort the predecessors by basic block. In GCC, each predecessor occurs
- // exactly once. However in LLVM a predecessor can occur several times,
- // and then every copy of the predecessor must be associated with exactly
- // the same incoming value in the phi node. Sorting the predecessors groups
- // multiple occurrences together, making this easy to handle.
- std::sort(Predecessors.begin(), Predecessors.end());
-
- // Now iterate over the predecessors, setting phi operands as we go.
- TreeVector::iterator VI = IncomingValues.begin(), VE = IncomingValues.end();
- PredVector::iterator PI = Predecessors.begin(), PE = Predecessors.end();
- PhiArguments.resize(Predecessors.size());
- while (PI != PE) {
- // The predecessor basic block.
- BasicBlock *BB = PI->first;
-
- // Find the incoming value for this predecessor.
- while (VI != VE && VI->first != BB) ++VI;
- assert(VI != VE && "No value for predecessor!");
- Value *Val = EmitRegister(VI->second);
-
- // Need to bitcast to the right type (useless_type_conversion_p). Place
- // the bitcast at the end of the predecessor, before the terminator.
- if (Val->getType() != P.PHI->getType())
- Val = new BitCastInst(Val, P.PHI->getType(), "", BB->getTerminator());
-
- // Add the phi node arguments for all occurrences of this predecessor.
- do {
- // Place the argument at the position given by PI->second, which is the
- // original position before sorting of the predecessor in the pred list.
- // Since the predecessors were sorted non-deterministically (by pointer
- // value), this ensures that the same bitcode is produced on any run.
- PhiArguments[PI++->second] = std::make_pair(BB, Val);
- } while (PI != PE && PI->first == BB);
- }
-
- // Add the operands to the phi node.
- P.PHI->reserveOperandSpace(PhiArguments.size());
- for (ValueVector::iterator I = PhiArguments.begin(), E = PhiArguments.end();
- I != E; ++I)
- P.PHI->addIncoming(I->second, I->first);
-
- IncomingValues.clear();
- PhiArguments.clear();
- Predecessors.clear();
- }
-
- PendingPhis.clear();
-}
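
As a hedged illustration (not from the patch) of why duplicated predecessors matter here: a switch can list the same destination block for several case values, so that block appears more than once in the LLVM predecessor list and every copy must be paired with the same phi operand.

    int f(int x) {
      int y = 5;
      switch (x) {
      case 0:
      case 1:  break;          /* both cases branch straight to the join block */
      default: y = 20; break;
      }
      return y;                /* the join block's phi for y may see the switch
                                  block twice, both times with the value 5 */
    }
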
-
-Function *TreeToLLVM::FinishFunctionBody() {
- // Insert the return block at the end of the function.
- BeginBlock(ReturnBB);
-
- SmallVector <Value *, 4> RetVals;
-
- // If the function returns a value, get it into a register and return it now.
- if (!Fn->getReturnType()->isVoidTy()) {
- tree TreeRetVal = DECL_RESULT(FnDecl);
- if (!AGGREGATE_TYPE_P(TREE_TYPE(TreeRetVal)) &&
- TREE_CODE(TREE_TYPE(TreeRetVal)) != COMPLEX_TYPE) {
- // If the DECL_RESULT is a scalar type, just load out the return value
- // and return it.
- Value *RetVal = Builder.CreateLoad(DECL_LOCAL(TreeRetVal), "retval");
- RetVal = Builder.CreateBitCast(RetVal, Fn->getReturnType());
- RetVals.push_back(RetVal);
- } else {
- Value *RetVal = DECL_LOCAL(TreeRetVal);
- if (const StructType *STy = dyn_cast<StructType>(Fn->getReturnType())) {
- Value *R1 = Builder.CreateBitCast(RetVal, STy->getPointerTo());
-
- llvm::Value *Idxs[2];
- Idxs[0] = ConstantInt::get(llvm::Type::getInt32Ty(Context), 0);
- for (unsigned ri = 0; ri < STy->getNumElements(); ++ri) {
- Idxs[1] = ConstantInt::get(llvm::Type::getInt32Ty(Context), ri);
- Value *GEP = Builder.CreateGEP(R1, Idxs, Idxs+2, "mrv_gep");
- Value *E = Builder.CreateLoad(GEP, "mrv");
- RetVals.push_back(E);
- }
- } else {
- // Otherwise, this aggregate result must be something that is returned
- // in a scalar register for this target. We must bit convert the
- // aggregate to the specified scalar type, which we do by casting the
- // pointer and loading. The load does not necessarily start at the
- // beginning of the aggregate (x86-64).
- if (ReturnOffset) {
- RetVal = Builder.CreateBitCast(RetVal, Type::getInt8PtrTy(Context));
- RetVal = Builder.CreateGEP(RetVal,
- ConstantInt::get(TD.getIntPtrType(Context), ReturnOffset));
- }
- RetVal = Builder.CreateBitCast(RetVal,
- Fn->getReturnType()->getPointerTo());
- RetVal = Builder.CreateLoad(RetVal, "retval");
- RetVals.push_back(RetVal);
- }
- }
- }
- if (RetVals.empty())
- Builder.CreateRetVoid();
- else if (RetVals.size() == 1 && RetVals[0]->getType() == Fn->getReturnType()){
- Builder.CreateRet(RetVals[0]);
- } else {
- assert(Fn->getReturnType()->isAggregateType() && "Return type mismatch!");
- Builder.CreateAggregateRet(RetVals.data(), RetVals.size());
- }
-
- // Populate phi nodes with their operands now that all ssa names have been
- // defined and all basic blocks output.
- PopulatePhiNodes();
-
- // Now that phi nodes have been output, emit pending exception handling code.
- EmitLandingPads();
- EmitFailureBlocks();
- EmitRewindBlock();
-
- if (EmitDebugInfo()) {
- // FIXME: This should be output just before the return call generated above.
- // But because EmitFunctionEnd pops the region stack, if the call to
- // PopulatePhiNodes (for example) generates complicated debug info then the
- // debug info logic barfs. Testcases showing this are 20011126-2.c
- // or pr42221.c from the gcc testsuite compiled with -g -O3.
- TheDebugInfo->EmitStopPoint(ReturnBB, Builder);
- TheDebugInfo->EmitFunctionEnd(true);
- }
-
-#ifdef NDEBUG
- // When processing broken code it can be awkward to ensure that every SSA name
- // that was used has a definition. So in this case we play it cool and create
- // an artificial definition for such SSA names. The choice of definition does
- // not matter because the compiler is going to exit with an error anyway.
- if (errorcount || sorrycount)
-#else
- // When checks are enabled, complain if an SSA name was used but not defined.
-#endif
- for (DenseMap<tree,TrackingVH<Value> >::const_iterator I = SSANames.begin(),
- E = SSANames.end(); I != E; ++I) {
- Value *NameDef = I->second;
- // If this is not a placeholder then the SSA name was defined.
- if (!isSSAPlaceholder(NameDef))
- continue;
-
- // If an error occurred then replace the placeholder with undef. Thanks
- // to this we can just bail out on errors, without having to worry about
- // whether we defined every SSA name.
- if (errorcount || sorrycount) {
- NameDef->replaceAllUsesWith(UndefValue::get(NameDef->getType()));
- delete NameDef;
- } else {
- debug_tree(I->first);
- llvm_unreachable("SSA name never defined!");
- }
- }
-
- return Fn;
-}
-
-/// getBasicBlock - Find or create the LLVM basic block corresponding to BB.
-BasicBlock *TreeToLLVM::getBasicBlock(basic_block bb) {
- // If we already associated an LLVM basic block with BB, then return it.
- DenseMap<basic_block, BasicBlock*>::iterator I = BasicBlocks.find(bb);
- if (I != BasicBlocks.end())
- return I->second;
-
- // Otherwise, create a new LLVM basic block.
- BasicBlock *BB = BasicBlock::Create(Context);
-
- // All basic blocks that directly correspond to GCC basic blocks (those
- // created here) must have a name. All artificial basic blocks produced
- // while generating code must be nameless. That way, artificial blocks
- // can be easily identified.
-
- // Give the basic block a name. If the user specified -fverbose-asm then
- // use the same naming scheme as GCC.
- if (flag_verbose_asm) {
- // If BB contains labels, name the LLVM basic block after the first label.
- gimple stmt = first_stmt(bb);
- if (stmt && gimple_code(stmt) == GIMPLE_LABEL) {
- tree label = gimple_label_label(stmt);
- const std::string &LabelName = getDescriptiveName(label);
- if (!LabelName.empty())
- BB->setName("<" + LabelName + ">");
- } else {
- // When there is no label, use the same naming scheme as the GCC tree dumps.
- Twine Index(bb->index);
- BB->setName("<bb " + Index + ">");
- }
- } else {
- Twine Index(bb->index);
- BB->setName(Index);
- }
-
- return BasicBlocks[bb] = BB;
-}
-
-/// getLabelDeclBlock - Lazily get and create a basic block for the specified
-/// label.
-BasicBlock *TreeToLLVM::getLabelDeclBlock(tree LabelDecl) {
- assert(TREE_CODE(LabelDecl) == LABEL_DECL && "Isn't a label!?");
- if (DECL_LOCAL_SET_P(LabelDecl))
- return cast<BasicBlock>(DECL_LOCAL(LabelDecl));
-
- basic_block bb = label_to_block(LabelDecl);
- if (!bb) {
- sorry("address of a non-local label");
- bb = ENTRY_BLOCK_PTR; // Do not crash.
- }
-
- BasicBlock *BB = getBasicBlock(bb);
- SET_DECL_LOCAL(LabelDecl, BB);
- return BB;
-}
-
-void TreeToLLVM::EmitBasicBlock(basic_block bb) {
- ++NumBasicBlocks;
-
- // Avoid outputting a pointless branch at the end of the entry block.
- if (bb != ENTRY_BLOCK_PTR)
- BeginBlock(getBasicBlock(bb));
-
- // Create an LLVM phi node for each GCC phi and define the associated ssa name
- // using it. Do not populate with operands at this point since some ssa names
- // the phi uses may not have been defined yet - phis are special this way.
- for (gimple_stmt_iterator gsi = gsi_start_phis(bb); !gsi_end_p(gsi);
- gsi_next(&gsi)) {
- gimple gcc_phi = gsi_stmt(gsi);
- // Skip virtual operands.
- if (!is_gimple_reg(gimple_phi_result(gcc_phi)))
- continue;
-
- // Create the LLVM phi node.
- const Type *Ty = GetRegType(TREE_TYPE(gimple_phi_result(gcc_phi)));
- PHINode *PHI = Builder.CreatePHI(Ty);
-
- // The phi defines the associated ssa name.
- tree name = gimple_phi_result(gcc_phi);
- assert(TREE_CODE(name) == SSA_NAME && "PHI result not an SSA name!");
- if (flag_verbose_asm)
- NameValue(PHI, name);
- DefineSSAName(name, PHI);
-
- // The phi operands will be populated later - remember the phi node.
- PhiRecord P = { gcc_phi, PHI };
- PendingPhis.push_back(P);
- }
-
- // Render statements.
- for (gimple_stmt_iterator gsi = gsi_start_bb(bb); !gsi_end_p(gsi);
- gsi_next(&gsi)) {
- gimple stmt = gsi_stmt(gsi);
- ++NumStatements;
-
- if (EmitDebugInfo()) {
- if (gimple_has_location(stmt)) {
- TheDebugInfo->setLocationFile(gimple_filename(stmt));
- TheDebugInfo->setLocationLine(gimple_lineno(stmt));
- } else {
- TheDebugInfo->setLocationFile("");
- TheDebugInfo->setLocationLine(0);
- }
- TheDebugInfo->EmitStopPoint(Builder.GetInsertBlock(), Builder);
- }
-
- switch (gimple_code(stmt)) {
- case GIMPLE_ASM:
- RenderGIMPLE_ASM(stmt);
- break;
-
- case GIMPLE_ASSIGN:
- RenderGIMPLE_ASSIGN(stmt);
- break;
-
- case GIMPLE_CALL:
- RenderGIMPLE_CALL(stmt);
- break;
-
- case GIMPLE_COND:
- RenderGIMPLE_COND(stmt);
- break;
-
- case GIMPLE_DEBUG:
- // TODO: Output debug info rather than just discarding it.
- break;
-
- case GIMPLE_EH_DISPATCH:
- RenderGIMPLE_EH_DISPATCH(stmt);
- break;
-
- case GIMPLE_GOTO:
- RenderGIMPLE_GOTO(stmt);
- break;
-
- case GIMPLE_LABEL:
- case GIMPLE_NOP:
- case GIMPLE_PREDICT:
- break;
-
- case GIMPLE_RESX:
- RenderGIMPLE_RESX(stmt);
- break;
-
- case GIMPLE_RETURN:
- RenderGIMPLE_RETURN(stmt);
- break;
-
- case GIMPLE_SWITCH:
- RenderGIMPLE_SWITCH(stmt);
- break;
-
- default:
- dump(stmt);
- llvm_unreachable("Unhandled GIMPLE statement during LLVM emission!");
- }
- }
-
- if (EmitDebugInfo()) {
- TheDebugInfo->setLocationFile("");
- TheDebugInfo->setLocationLine(0);
- TheDebugInfo->EmitStopPoint(Builder.GetInsertBlock(), Builder);
- }
-
- // Add a branch to the fallthru block.
- edge e;
- edge_iterator ei;
- FOR_EACH_EDGE (e, ei, bb->succs)
- if (e->flags & EDGE_FALLTHRU) {
- Builder.CreateBr(getBasicBlock(e->dest));
- break;
- }
-}
-
-Function *TreeToLLVM::EmitFunction() {
- // Set up parameters and prepare for return, for the function.
- StartFunctionBody();
-
- // Output the basic blocks.
- basic_block bb;
- FOR_EACH_BB(bb)
- EmitBasicBlock(bb);
-
- // Wrap things up.
- return FinishFunctionBody();
-}
-
-/// EmitAggregate - Store the specified tree node into the location given by
-/// DestLoc.
-void TreeToLLVM::EmitAggregate(tree exp, const MemRef &DestLoc) {
- assert(AGGREGATE_TYPE_P(TREE_TYPE(exp)) && "Expected an aggregate type!");
- if (TREE_CODE(exp) == CONSTRUCTOR) {
- EmitCONSTRUCTOR(exp, &DestLoc);
- return;
- }
- LValue LV = EmitLV(exp);
- assert(!LV.isBitfield() && "Bitfields containing aggregates not supported!");
- EmitAggregateCopy(DestLoc, MemRef(LV.Ptr, LV.getAlignment(),
- TREE_THIS_VOLATILE(exp)), TREE_TYPE(exp));
-}
-
-/// get_constant_alignment - Return the alignment of constant EXP in bits.
-///
-static unsigned int
-get_constant_alignment (tree exp)
-{
- unsigned int align = TYPE_ALIGN (TREE_TYPE (exp));
-#ifdef CONSTANT_ALIGNMENT
- align = CONSTANT_ALIGNMENT (exp, align);
-#endif
- return align;
-}
-
-/// EmitLV - Convert the specified l-value tree node to LLVM code, returning
-/// the address of the result.
-LValue TreeToLLVM::EmitLV(tree exp) {
- LValue LV;
-
- switch (TREE_CODE(exp)) {
- default:
- debug_tree(exp);
- llvm_unreachable("Unhandled lvalue expression!");
-
- case PARM_DECL:
- case VAR_DECL:
- case FUNCTION_DECL:
- case CONST_DECL:
- case RESULT_DECL:
- LV = EmitLV_DECL(exp);
- break;
- case ARRAY_RANGE_REF:
- case ARRAY_REF:
- LV = EmitLV_ARRAY_REF(exp);
- break;
- case COMPONENT_REF:
- LV = EmitLV_COMPONENT_REF(exp);
- break;
- case BIT_FIELD_REF:
- LV = EmitLV_BIT_FIELD_REF(exp);
- break;
- case REALPART_EXPR:
- LV = EmitLV_XXXXPART_EXPR(exp, 0);
- break;
- case IMAGPART_EXPR:
- LV = EmitLV_XXXXPART_EXPR(exp, 1);
- break;
- case SSA_NAME:
- LV = EmitLV_SSA_NAME(exp);
- break;
- case TARGET_MEM_REF:
- LV = EmitLV_TARGET_MEM_REF(exp);
- break;
-
- // Constants.
- case LABEL_DECL: {
- LV = LValue(EmitLV_LABEL_DECL(exp), 1);
- break;
- }
- case COMPLEX_CST:
- case REAL_CST:
- case STRING_CST: {
- Value *Ptr = EmitAddressOf(exp);
- LV = LValue(Ptr, get_constant_alignment(exp) / 8);
- break;
- }
-
- // Type Conversion.
- case VIEW_CONVERT_EXPR:
- LV = EmitLV_VIEW_CONVERT_EXPR(exp);
- break;
-
- // Trivial Cases.
- case WITH_SIZE_EXPR:
- LV = EmitLV_WITH_SIZE_EXPR(exp);
- break;
- case INDIRECT_REF:
- LV = EmitLV_INDIRECT_REF(exp);
- break;
- }
-
- // Check that the type of the lvalue is indeed that of a pointer to the tree
- // node. This may not hold for bitfields because the type of a bitfield need
- // not match the type of the value being loaded out of it. Since LLVM has no
- // void* type, don't insist that void* be converted to a specific LLVM type.
- assert((LV.isBitfield() || VOID_TYPE_P(TREE_TYPE(exp)) ||
- LV.Ptr->getType() == ConvertType(TREE_TYPE(exp))->getPointerTo()) &&
- "LValue has wrong type!");
-
- return LV;
-}
-
-//===----------------------------------------------------------------------===//
-// ... Utility Functions ...
-//===----------------------------------------------------------------------===//
-
-void TreeToLLVM::TODO(tree exp) {
- if (exp) debug_tree(exp);
- llvm_unreachable("Unhandled tree node");
-}
-
-/// CastToAnyType - Cast the specified value to the specified type making no
-/// assumptions about the types of the arguments. This creates an inferred cast.
-Value *TreeToLLVM::CastToAnyType(Value *V, bool VisSigned,
- const Type* Ty, bool TyIsSigned) {
- // Eliminate useless casts of a type to itself.
- if (V->getType() == Ty)
- return V;
-
- // The types are different so we must cast. Use getCastOpcode to create an
- // inferred cast opcode.
- Instruction::CastOps opc =
- CastInst::getCastOpcode(V, VisSigned, Ty, TyIsSigned);
-
- // Generate the cast and return it.
- return Builder.CreateCast(opc, V, Ty);
-}
-
-/// CastToFPType - Cast the specified value to the specified type assuming
-/// that the value and type are floating point.
-Value *TreeToLLVM::CastToFPType(Value *V, const Type* Ty) {
- unsigned SrcBits = V->getType()->getPrimitiveSizeInBits();
- unsigned DstBits = Ty->getPrimitiveSizeInBits();
- if (SrcBits == DstBits)
- return V;
- Instruction::CastOps opcode = (SrcBits > DstBits ?
- Instruction::FPTrunc : Instruction::FPExt);
- return Builder.CreateCast(opcode, V, Ty);
-}
-
-/// CreateAnyAdd - Add two LLVM scalar values with the given GCC type. Does not
-/// support complex numbers. The type is used to set overflow flags.
-Value *TreeToLLVM::CreateAnyAdd(Value *LHS, Value *RHS, tree type) {
- if (FLOAT_TYPE_P(type))
- return Builder.CreateFAdd(LHS, RHS);
- return Builder.CreateAdd(LHS, RHS, "", hasNUW(type), hasNSW(type));
-}
-
-/// CreateAnyMul - Multiply two LLVM scalar values with the given GCC type.
-/// Does not support complex numbers. The type is used to set overflow flags.
-Value *TreeToLLVM::CreateAnyMul(Value *LHS, Value *RHS, tree type) {
- if (FLOAT_TYPE_P(type))
- return Builder.CreateFMul(LHS, RHS);
- return Builder.CreateMul(LHS, RHS, "", hasNUW(type), hasNSW(type));
-}
-
-/// CreateAnyNeg - Negate an LLVM scalar value with the given GCC type. Does
-/// not support complex numbers. The type is used to set overflow flags.
-Value *TreeToLLVM::CreateAnyNeg(Value *V, tree type) {
- if (FLOAT_TYPE_P(type))
- return Builder.CreateFNeg(V);
- return Builder.CreateNeg(V, "", hasNUW(type), hasNSW(type));
-}
-
-/// CreateAnySub - Subtract two LLVM scalar values with the given GCC type.
-/// Does not support complex numbers. The type is used to set overflow flags.
-Value *TreeToLLVM::CreateAnySub(Value *LHS, Value *RHS, tree type) {
- if (FLOAT_TYPE_P(type))
- return Builder.CreateFSub(LHS, RHS);
- return Builder.CreateSub(LHS, RHS, "", hasNUW(type), hasNSW(type));
-}
-
-/// CreateTemporary - Create a new alloca instruction of the specified type,
-/// inserting it into the entry block and returning it. The resulting
-/// instruction's type is a pointer to the specified type.
-AllocaInst *TreeToLLVM::CreateTemporary(const Type *Ty, unsigned align) {
- if (AllocaInsertionPoint == 0) {
- // Create a dummy instruction in the entry block as a marker to insert new
- // alloca instructions before. It doesn't matter what this instruction is,
- // it is dead. This allows us to insert allocas in order without having to
- // scan for an insertion point. Use an int -> int BitCast as the dummy.
- AllocaInsertionPoint = CastInst::Create(Instruction::BitCast,
- Constant::getNullValue(Type::getInt32Ty(Context)),
- Type::getInt32Ty(Context), "alloca point");
- // Insert it as the first instruction in the entry block.
- Fn->begin()->getInstList().insert(Fn->begin()->begin(),
- AllocaInsertionPoint);
- }
- return new AllocaInst(Ty, 0, align, "memtmp", AllocaInsertionPoint);
-}
-
-/// CreateTempLoc - Like CreateTemporary, but returns a MemRef.
-MemRef TreeToLLVM::CreateTempLoc(const Type *Ty) {
- AllocaInst *AI = CreateTemporary(Ty);
- // MemRefs do not allow alignment 0.
- if (!AI->getAlignment())
- AI->setAlignment(TD.getPrefTypeAlignment(Ty));
- return MemRef(AI, AI->getAlignment(), false);
-}
-
-/// BeginBlock - Add the specified basic block to the end of the function. If
-/// the previous block falls through into it, add an explicit branch.
-void TreeToLLVM::BeginBlock(BasicBlock *BB) {
- BasicBlock *CurBB = Builder.GetInsertBlock();
- // If the previous block falls through to BB, add an explicit branch.
- if (CurBB->getTerminator() == 0) {
- // If the previous block has no label and is empty, remove it: it is a
- // post-terminator block.
- if (CurBB->getName().empty() && CurBB->begin() == CurBB->end())
- CurBB->eraseFromParent();
- else
- // Otherwise, fall through to this block.
- Builder.CreateBr(BB);
- }
-
- // Add this block.
- Fn->getBasicBlockList().push_back(BB);
- Builder.SetInsertPoint(BB); // It is now the current block.
-}
-
- /// CopyAggregate - Recursively traverse the potentially aggregate src/dest
-/// ptrs, copying all of the elements.
-static void CopyAggregate(MemRef DestLoc, MemRef SrcLoc,
- LLVMBuilder &Builder, tree gccType) {
- assert(DestLoc.Ptr->getType() == SrcLoc.Ptr->getType() &&
- "Cannot copy between two pointers of different type!");
- const Type *ElTy =
- cast<PointerType>(DestLoc.Ptr->getType())->getElementType();
-
- unsigned Alignment = std::min(DestLoc.getAlignment(), SrcLoc.getAlignment());
-
- if (ElTy->isSingleValueType()) {
- LoadInst *V = Builder.CreateLoad(SrcLoc.Ptr, SrcLoc.Volatile);
- StoreInst *S = Builder.CreateStore(V, DestLoc.Ptr, DestLoc.Volatile);
- V->setAlignment(Alignment);
- S->setAlignment(Alignment);
- } else if (const StructType *STy = dyn_cast<StructType>(ElTy)) {
- const StructLayout *SL = getTargetData().getStructLayout(STy);
- for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
- if (gccType && isPaddingElement(gccType, i))
- continue;
- Value *DElPtr = Builder.CreateStructGEP(DestLoc.Ptr, i);
- Value *SElPtr = Builder.CreateStructGEP(SrcLoc.Ptr, i);
- unsigned Align = MinAlign(Alignment, SL->getElementOffset(i));
- CopyAggregate(MemRef(DElPtr, Align, DestLoc.Volatile),
- MemRef(SElPtr, Align, SrcLoc.Volatile),
- Builder, 0);
- }
- } else {
- const ArrayType *ATy = cast<ArrayType>(ElTy);
- unsigned EltSize = getTargetData().getTypeAllocSize(ATy->getElementType());
- for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i) {
- Value *DElPtr = Builder.CreateStructGEP(DestLoc.Ptr, i);
- Value *SElPtr = Builder.CreateStructGEP(SrcLoc.Ptr, i);
- unsigned Align = MinAlign(Alignment, i * EltSize);
- CopyAggregate(MemRef(DElPtr, Align, DestLoc.Volatile),
- MemRef(SElPtr, Align, SrcLoc.Volatile),
- Builder, 0);
- }
- }
-}
-
-/// CountAggregateElements - Return the number of elements in the specified type
-/// that will need to be loaded/stored if we copy this by explicit accesses.
-static unsigned CountAggregateElements(const Type *Ty) {
- if (Ty->isSingleValueType()) return 1;
-
- if (const StructType *STy = dyn_cast<StructType>(Ty)) {
- unsigned NumElts = 0;
- for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i)
- NumElts += CountAggregateElements(STy->getElementType(i));
- return NumElts;
- } else {
- const ArrayType *ATy = cast<ArrayType>(Ty);
- return ATy->getNumElements()*CountAggregateElements(ATy->getElementType());
- }
-}
-
- /// containsFPField - Indicates whether the given LLVM type contains any
- /// floating point elements.
- static bool containsFPField(const Type *LLVMTy) {
- if (LLVMTy->isFloatingPointTy())
- return true;
- const StructType* STy = dyn_cast<StructType>(LLVMTy);
- if (STy) {
- for (StructType::element_iterator I = STy->element_begin(),
- E = STy->element_end(); I != E; I++) {
- const Type *Ty = *I;
- if (Ty->isFloatingPointTy())
- return true;
- if (Ty->isStructTy() && containsFPField(Ty))
- return true;
- const ArrayType *ATy = dyn_cast<ArrayType>(Ty);
- if (ATy && containsFPField(ATy->getElementType()))
- return true;
- const VectorType *VTy = dyn_cast<VectorType>(Ty);
- if (VTy && containsFPField(VTy->getElementType()))
- return true;
- }
- }
- return false;
-}
-
-#ifndef TARGET_LLVM_MIN_BYTES_COPY_BY_MEMCPY
-#define TARGET_LLVM_MIN_BYTES_COPY_BY_MEMCPY 64
-#endif
-
-/// EmitAggregateCopy - Copy the elements from SrcLoc to DestLoc, using the
-/// GCC type specified by GCCType to know which elements to copy.
-void TreeToLLVM::EmitAggregateCopy(MemRef DestLoc, MemRef SrcLoc, tree type) {
- if (DestLoc.Ptr == SrcLoc.Ptr && !DestLoc.Volatile && !SrcLoc.Volatile)
- return; // noop copy.
-
- // If the type is small, copy the elements instead of using a block copy.
- const Type *LLVMTy = ConvertType(type);
- unsigned NumElts = CountAggregateElements(LLVMTy);
- if (TREE_CODE(TYPE_SIZE(type)) == INTEGER_CST &&
- (NumElts == 1 ||
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(type)) <
- TARGET_LLVM_MIN_BYTES_COPY_BY_MEMCPY)) {
-
- // Some targets (x87) cannot pass non-floating-point values using FP
- // instructions. The LLVM type for a union may include FP elements,
- // even if some of the union fields do not; it is unsafe to pass such
- // converted types element by element. PR 2680.
-
- // If the GCC type is not fully covered by the LLVM type, use memcpy. This
- // can occur with unions etc.
- if ((TREE_CODE(type) != UNION_TYPE || !containsFPField(LLVMTy)) &&
- !TheTypeConverter->GCCTypeOverlapsWithLLVMTypePadding(type, LLVMTy) &&
- // Don't copy tons of tiny elements.
- NumElts <= 8) {
- DestLoc.Ptr = Builder.CreateBitCast(DestLoc.Ptr, LLVMTy->getPointerTo());
- SrcLoc.Ptr = Builder.CreateBitCast(SrcLoc.Ptr, LLVMTy->getPointerTo());
- CopyAggregate(DestLoc, SrcLoc, Builder, type);
- return;
- }
- }
-
- Value *TypeSize = EmitRegister(TYPE_SIZE_UNIT(type));
- EmitMemCpy(DestLoc.Ptr, SrcLoc.Ptr, TypeSize,
- std::min(DestLoc.getAlignment(), SrcLoc.getAlignment()));
-}
-
-/// ZeroAggregate - Recursively traverse the potentially aggregate DestLoc,
-/// zero'ing all of the elements.
-static void ZeroAggregate(MemRef DestLoc, LLVMBuilder &Builder) {
- const Type *ElTy =
- cast<PointerType>(DestLoc.Ptr->getType())->getElementType();
- if (ElTy->isSingleValueType()) {
- StoreInst *St = Builder.CreateStore(Constant::getNullValue(ElTy),
- DestLoc.Ptr, DestLoc.Volatile);
- St->setAlignment(DestLoc.getAlignment());
- } else if (const StructType *STy = dyn_cast<StructType>(ElTy)) {
- const StructLayout *SL = getTargetData().getStructLayout(STy);
- for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
- Value *Ptr = Builder.CreateStructGEP(DestLoc.Ptr, i);
- unsigned Alignment = MinAlign(DestLoc.getAlignment(),
- SL->getElementOffset(i));
- ZeroAggregate(MemRef(Ptr, Alignment, DestLoc.Volatile), Builder);
- }
- } else {
- const ArrayType *ATy = cast<ArrayType>(ElTy);
- unsigned EltSize = getTargetData().getTypeAllocSize(ATy->getElementType());
- for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i) {
- Value *Ptr = Builder.CreateStructGEP(DestLoc.Ptr, i);
- unsigned Alignment = MinAlign(DestLoc.getAlignment(), i * EltSize);
- ZeroAggregate(MemRef(Ptr, Alignment, DestLoc.Volatile), Builder);
- }
- }
-}
-
-/// EmitAggregateZero - Zero the elements of DestLoc.
-void TreeToLLVM::EmitAggregateZero(MemRef DestLoc, tree type) {
- // If the type is small, copy the elements instead of using a block copy.
- if (TREE_CODE(TYPE_SIZE(type)) == INTEGER_CST &&
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(type)) < 128) {
- const Type *LLVMTy = ConvertType(type);
-
- // If the GCC type is not fully covered by the LLVM type, use memset. This
- // can occur with unions etc.
- if (!TheTypeConverter->GCCTypeOverlapsWithLLVMTypePadding(type, LLVMTy) &&
- // Don't zero tons of tiny elements.
- CountAggregateElements(LLVMTy) <= 8) {
- DestLoc.Ptr = Builder.CreateBitCast(DestLoc.Ptr, LLVMTy->getPointerTo());
- ZeroAggregate(DestLoc, Builder);
- return;
- }
- }
-
- EmitMemSet(DestLoc.Ptr, ConstantInt::get(Type::getInt8Ty(Context), 0),
- EmitRegister(TYPE_SIZE_UNIT(type)), DestLoc.getAlignment());
-}
-
-Value *TreeToLLVM::EmitMemCpy(Value *DestPtr, Value *SrcPtr, Value *Size,
- unsigned Align) {
- const Type *SBP = Type::getInt8PtrTy(Context);
- const Type *IntPtr = TD.getIntPtrType(Context);
- Value *Ops[5] = {
- Builder.CreateBitCast(DestPtr, SBP),
- Builder.CreateBitCast(SrcPtr, SBP),
- Builder.CreateIntCast(Size, IntPtr, /*isSigned*/true),
- ConstantInt::get(Type::getInt32Ty(Context), Align),
- ConstantInt::get(Type::getInt1Ty(Context), false)
- };
- const Type *ArgTypes[3] = { SBP, SBP, IntPtr };
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::memcpy,
- ArgTypes, 3), Ops, Ops+5);
- return Ops[0];
-}
-
-Value *TreeToLLVM::EmitMemMove(Value *DestPtr, Value *SrcPtr, Value *Size,
- unsigned Align) {
- const Type *SBP = Type::getInt8PtrTy(Context);
- const Type *IntPtr = TD.getIntPtrType(Context);
- Value *Ops[5] = {
- Builder.CreateBitCast(DestPtr, SBP),
- Builder.CreateBitCast(SrcPtr, SBP),
- Builder.CreateIntCast(Size, IntPtr, /*isSigned*/true),
- ConstantInt::get(Type::getInt32Ty(Context), Align),
- ConstantInt::get(Type::getInt1Ty(Context), false)
- };
- const Type *ArgTypes[3] = { SBP, SBP, IntPtr };
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::memmove,
- ArgTypes, 3), Ops, Ops+5);
- return Ops[0];
-}
-
-Value *TreeToLLVM::EmitMemSet(Value *DestPtr, Value *SrcVal, Value *Size,
- unsigned Align) {
- const Type *SBP = Type::getInt8PtrTy(Context);
- const Type *IntPtr = TD.getIntPtrType(Context);
- Value *Ops[5] = {
- Builder.CreateBitCast(DestPtr, SBP),
- Builder.CreateIntCast(SrcVal, Type::getInt8Ty(Context), /*isSigned*/true),
- Builder.CreateIntCast(Size, IntPtr, /*isSigned*/true),
- ConstantInt::get(Type::getInt32Ty(Context), Align),
- ConstantInt::get(Type::getInt1Ty(Context), false)
- };
- const Type *ArgTypes[2] = { SBP, IntPtr };
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::memset,
- ArgTypes, 2), Ops, Ops+5);
- return Ops[0];
-}
-
-
- // Emit the llvm.gcroot intrinsic for a variable whose type has the gcroot attribute.
-void TreeToLLVM::EmitTypeGcroot(Value *V) {
- // GC intrinsics can only be used in functions which specify a collector.
- Fn->setGC("shadow-stack");
-
- Function *gcrootFun = Intrinsic::getDeclaration(TheModule,
- Intrinsic::gcroot);
-
- // V is a pointer to a value of opaque pointer type, but the intrinsic
- // expects i8** and i8*, so cast accordingly.
- const PointerType *Ty = Type::getInt8PtrTy(Context);
- V = Builder.CreateBitCast(V, Ty->getPointerTo());
-
- Value *Ops[2] = {
- V,
- ConstantPointerNull::get(Ty)
- };
-
- Builder.CreateCall(gcrootFun, Ops, Ops+2);
-}
-
-// Emits annotate intrinsic if the decl has the annotate attribute set.
-void TreeToLLVM::EmitAnnotateIntrinsic(Value *V, tree decl) {
-
- // Handle annotate attribute on global.
- tree annotateAttr = lookup_attribute("annotate", DECL_ATTRIBUTES (decl));
-
- if (!annotateAttr)
- return;
-
- Function *annotateFun = Intrinsic::getDeclaration(TheModule,
- Intrinsic::var_annotation);
-
- // Get file and line number
- Constant *lineNo =
- ConstantInt::get(Type::getInt32Ty(Context), DECL_SOURCE_LINE(decl));
- Constant *file = ConvertMetadataStringToGV(DECL_SOURCE_FILE(decl));
- const Type *SBP = Type::getInt8PtrTy(Context);
- file = TheFolder->CreateBitCast(file, SBP);
-
- // There may be multiple annotate attributes. Pass the return value of
- // lookup_attribute to successive lookups.
- while (annotateAttr) {
-
- // Each annotate attribute is a tree list.
- // Get value of list which is our linked list of args.
- tree args = TREE_VALUE(annotateAttr);
-
- // Each annotate attribute may have multiple args.
- // Treat each arg as if it were a separate annotate attribute.
- for (tree a = args; a; a = TREE_CHAIN(a)) {
- // Each element of the arg list is a tree list, so get value
- tree val = TREE_VALUE(a);
-
- // Assert it's a string, and then get that string.
- assert(TREE_CODE(val) == STRING_CST &&
- "Annotate attribute arg should always be a string");
- const Type *SBP = Type::getInt8PtrTy(Context);
- Constant *strGV = EmitAddressOf(val);
- Value *Ops[4] = {
- Builder.CreateBitCast(V, SBP),
- Builder.CreateBitCast(strGV, SBP),
- file,
- lineNo
- };
-
- Builder.CreateCall(annotateFun, Ops, Ops+4);
- }
-
- // Get next annotate attribute.
- annotateAttr = TREE_CHAIN(annotateAttr);
- if (annotateAttr)
- annotateAttr = lookup_attribute("annotate", annotateAttr);
- }
-}
-
-//===----------------------------------------------------------------------===//
-// ... Basic Lists and Binding Scopes ...
-//===----------------------------------------------------------------------===//
-
-/// EmitAutomaticVariableDecl - Emit the function-local decl to the current
-/// function and set DECL_LOCAL for the decl to the right pointer.
-void TreeToLLVM::EmitAutomaticVariableDecl(tree decl) {
- // If this is just the rotten husk of a variable that the gimplifier
- // eliminated all uses of, but is preserving for debug info, ignore it.
- if (TREE_CODE(decl) == VAR_DECL && DECL_VALUE_EXPR(decl))
- return;
-
- tree type = TREE_TYPE(decl);
- const Type *Ty; // Type to allocate
- Value *Size = 0; // Amount to alloca (null for 1)
-
- if (DECL_SIZE(decl) == 0) { // Variable with incomplete type.
- if (DECL_INITIAL(decl) == 0)
- return; // Error message was already done; now avoid a crash.
- else {
- // "An initializer is going to decide the size of this array."??
- TODO(decl);
- abort();
- }
- } else if (TREE_CODE(DECL_SIZE_UNIT(decl)) == INTEGER_CST) {
- // Variable of fixed size that goes on the stack.
- Ty = ConvertType(type);
- } else {
- // Compute the variable's size in bytes.
- Size = EmitRegister(DECL_SIZE_UNIT(decl));
- Ty = Type::getInt8Ty(Context);
- }
-
- unsigned Alignment = 0; // Alignment in bytes.
-
- // Set the alignment for the local if one of the following conditions is met:
- // 1) DECL_ALIGN is better than the alignment required by the ABI, or
- // 2) DECL_ALIGN was set by the user.
- if (DECL_ALIGN(decl)) {
- unsigned TargetAlign = getTargetData().getABITypeAlignment(Ty);
- if (DECL_USER_ALIGN(decl) || 8 * TargetAlign < (unsigned)DECL_ALIGN(decl))
- Alignment = DECL_ALIGN(decl) / 8;
- }
-
- // Insert an alloca for this variable.
- AllocaInst *AI;
- if (!Size) { // Fixed size alloca -> entry block.
- AI = CreateTemporary(Ty);
- } else {
- AI = Builder.CreateAlloca(Ty, Size);
- }
- NameValue(AI, decl);
-
- AI->setAlignment(Alignment);
-
- SET_DECL_LOCAL(decl, AI);
-
- // Handle annotate attributes
- if (DECL_ATTRIBUTES(decl))
- EmitAnnotateIntrinsic(AI, decl);
-
- // Handle gcroot attribute
- if (POINTER_TYPE_P(TREE_TYPE (decl))
- && lookup_attribute("gcroot", TYPE_ATTRIBUTES(TREE_TYPE (decl))))
- {
- // Null out the local variable so that a stack crawl performed before it is
- // initialized does not follow garbage pointers.
- const Type *T = cast<PointerType>(AI->getType())->getElementType();
- EmitTypeGcroot(AI);
- Builder.CreateStore(Constant::getNullValue(T), AI);
- }
-
- if (EmitDebugInfo()) {
- if (DECL_NAME(decl)) {
- TheDebugInfo->EmitDeclare(decl, dwarf::DW_TAG_auto_variable,
- AI->getNameStr().c_str(), TREE_TYPE(decl), AI,
- Builder);
- } else if (TREE_CODE(decl) == RESULT_DECL) {
- TheDebugInfo->EmitDeclare(decl, dwarf::DW_TAG_return_variable,
- AI->getNameStr().c_str(), TREE_TYPE(decl), AI,
- Builder);
- }
- }
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Control Flow ...
-//===----------------------------------------------------------------------===//
-
-/// ConvertTypeInfo - Convert an exception handling type info into a pointer to
-/// the associated runtime type info object.
-static Constant *ConvertTypeInfo(tree type) {
- // TODO: Once pass_ipa_free_lang is made a default pass, remove the call to
- // lookup_type_for_runtime below.
- if (TYPE_P (type))
- type = lookup_type_for_runtime (type);
- STRIP_NOPS(type);
- if (TREE_CODE(type) == ADDR_EXPR)
- type = TREE_OPERAND(type, 0);
- return EmitAddressOf(type);
-}
-
-/// getExceptionPtr - Return the local holding the exception pointer for the
-/// given exception handling region, creating it if necessary.
-AllocaInst *TreeToLLVM::getExceptionPtr(unsigned RegionNo) {
- if (RegionNo >= ExceptionPtrs.size())
- ExceptionPtrs.resize(RegionNo + 1, 0);
-
- AllocaInst *&ExceptionPtr = ExceptionPtrs[RegionNo];
-
- if (!ExceptionPtr) {
- ExceptionPtr = CreateTemporary(Type::getInt8PtrTy(Context));
- ExceptionPtr->setName("exc_tmp");
- }
-
- return ExceptionPtr;
-}
-
-/// getExceptionFilter - Return the local holding the filter value for the
-/// given exception handling region, creating it if necessary.
-AllocaInst *TreeToLLVM::getExceptionFilter(unsigned RegionNo) {
- if (RegionNo >= ExceptionFilters.size())
- ExceptionFilters.resize(RegionNo + 1, 0);
-
- AllocaInst *&ExceptionFilter = ExceptionFilters[RegionNo];
-
- if (!ExceptionFilter) {
- ExceptionFilter = CreateTemporary(Type::getInt32Ty(Context));
- ExceptionFilter->setName("filt_tmp");
- }
-
- return ExceptionFilter;
-}
-
-/// getFailureBlock - Return the basic block containing the failure code for
-/// the given exception handling region, creating it if necessary.
-BasicBlock *TreeToLLVM::getFailureBlock(unsigned RegionNo) {
- if (RegionNo >= FailureBlocks.size())
- FailureBlocks.resize(RegionNo + 1, 0);
-
- BasicBlock *&FailureBlock = FailureBlocks[RegionNo];
-
- if (!FailureBlock)
- FailureBlock = BasicBlock::Create(Context, "fail");
-
- return FailureBlock;
-}
-
-/// EmitLandingPads - Emit EH landing pads.
-void TreeToLLVM::EmitLandingPads() {
- // If there are no invokes then there is nothing to do.
- if (NormalInvokes.empty())
- return;
-
- // If a GCC post landing pad is shared by several exception handling regions,
- // or if there is a normal edge to it, then create LLVM landing pads for each
- // eh region. Calls to eh.exception and eh.selector will then go in the LLVM
- // landing pad, which branches to the GCC post landing pad.
- for (unsigned LPadNo = 1; LPadNo < NormalInvokes.size(); ++LPadNo) {
- // Get the list of invokes for this GCC landing pad.
- SmallVector<InvokeInst *, 8> &InvokesForPad = NormalInvokes[LPadNo];
-
- if (InvokesForPad.empty())
- continue;
-
- // All of the invokes unwind to the GCC post landing pad.
- BasicBlock *PostPad = InvokesForPad[0]->getUnwindDest();
-
- // If the number of invokes is equal to the number of predecessors of the
- // post landing pad then it follows that no other GCC landing pad has any
- // invokes that unwind to this post landing pad, and also that no normal
- // edges land at this post pad. In this case there is no need to create
- // an LLVM specific landing pad.
- if ((unsigned)std::distance(pred_begin(PostPad), pred_end(PostPad)) ==
- InvokesForPad.size())
- continue;
-
- // Create the LLVM landing pad right before the GCC post landing pad.
- BasicBlock *LPad = BasicBlock::Create(Context, "lpad", Fn, PostPad);
-
- // Redirect invoke unwind edges from the GCC post landing pad to LPad.
- for (unsigned i = 0, e = InvokesForPad.size(); i < e; ++i)
- InvokesForPad[i]->setSuccessor(1, LPad);
-
- // If there are any PHI nodes in PostPad, we need to update them to merge
- // incoming values from LPad instead.
- pred_iterator PB = pred_begin(LPad), PE = pred_end(LPad);
- for (BasicBlock::iterator II = PostPad->begin(); isa<PHINode>(II);) {
- PHINode *PN = cast<PHINode>(II++);
-
- // Check to see if all of the values coming in via invoke unwind edges are
- // the same. If so, we don't need to create a new PHI node.
- Value *InVal = PN->getIncomingValueForBlock(*PB);
- for (pred_iterator PI = PB; PI != PE; ++PI) {
- if (PI != PB && InVal != PN->getIncomingValueForBlock(*PI)) {
- InVal = 0;
- break;
- }
- }
-
- if (InVal == 0) {
- // Different unwind edges have different values. Create a new PHI node
- // in LPad.
- PHINode *NewPN = PHINode::Create(PN->getType(), PN->getName()+".lpad",
- LPad);
- // Add an entry for each unwind edge, using the value from the old PHI.
- for (pred_iterator PI = PB; PI != PE; ++PI)
- NewPN->addIncoming(PN->getIncomingValueForBlock(*PI), *PI);
-
- // Now use this new PHI as the common incoming value for LPad in PN.
- InVal = NewPN;
- }
-
- // Revector exactly one entry in the PHI node to come from LPad and
- // delete the entries that came from the invoke unwind edges.
- for (pred_iterator PI = PB; PI != PE; ++PI)
- PN->removeIncomingValue(*PI);
- PN->addIncoming(InVal, LPad);
- }
-
- // Add a fallthrough from LPad to the original landing pad.
- BranchInst::Create(PostPad, LPad);
- }
-
- // Initialize the exception pointer and selector value for each exception
- // handling region at the start of the corresponding landing pad. At this
- // point each exception handling region has its own landing pad, which is
- // only reachable via the unwind edges of the region's invokes.
- std::vector<Value*> Args;
- Function *ExcIntr = Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_exception);
- Function *SlctrIntr = Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_selector);
- for (unsigned LPadNo = 1; LPadNo < NormalInvokes.size(); ++LPadNo) {
- // Get the list of invokes for this GCC landing pad.
- SmallVector<InvokeInst *, 8> &InvokesForPad = NormalInvokes[LPadNo];
-
- if (InvokesForPad.empty())
- continue;
-
- // All of the invokes unwind to the landing pad.
- BasicBlock *LPad = InvokesForPad[0]->getUnwindDest();
-
- // The exception handling region this landing pad is for.
- eh_region region = get_eh_region_from_lp_number(LPadNo);
- assert(region->index > 0 && "Invalid landing pad region!");
- unsigned RegionNo = region->index;
-
- // Insert instructions at the start of the landing pad, but after any phis.
- Builder.SetInsertPoint(LPad, LPad->getFirstNonPHI());
-
- // Fetch the exception pointer.
- Value *ExcPtr = Builder.CreateCall(ExcIntr, "exc_ptr");
-
- // Store it if made use of elsewhere.
- if (RegionNo < ExceptionPtrs.size() && ExceptionPtrs[RegionNo])
- Builder.CreateStore(ExcPtr, ExceptionPtrs[RegionNo]);
-
- // Get the exception selector. The first argument is the exception pointer.
- Args.push_back(ExcPtr);
-
- // It is followed by the personality function.
- tree personality = DECL_FUNCTION_PERSONALITY(FnDecl);
- if (!personality) {
- assert(function_needs_eh_personality(cfun) == eh_personality_any &&
- "No exception handling personality!");
- personality = lang_hooks.eh_personality();
- }
- Args.push_back(Builder.CreateBitCast(DECL_LLVM(personality),
- Type::getInt8PtrTy(Context)));
-
- Constant *CatchAll = TheModule->getGlobalVariable("llvm.eh.catch.all.value");
- if (!CatchAll) {
- // The representation of a catch-all is language specific.
- // TODO: Remove this hack.
- Constant *Init = 0;
- StringRef LanguageName = lang_hooks.name;
- if (LanguageName == "GNU Ada") {
- StringRef Name = "__gnat_all_others_value";
- Init = TheModule->getGlobalVariable(Name);
- if (!Init)
- Init = new GlobalVariable(*TheModule, ConvertType(integer_type_node),
- /*isConstant*/true,
- GlobalValue::ExternalLinkage,
- /*Initializer*/NULL, Name);
- } else {
- // Other languages use a null pointer.
- Init = Constant::getNullValue(Type::getInt8PtrTy(Context));
- }
- CatchAll = new GlobalVariable(*TheModule, Init->getType(), true,
- GlobalVariable::LinkOnceAnyLinkage,
- Init, "llvm.eh.catch.all.value");
- cast<GlobalVariable>(CatchAll)->setSection("llvm.metadata");
- AttributeUsedGlobals.insert(CatchAll);
- }
-
- bool AllCaught = false; // Did we see a catch-all or no-throw?
- bool HasCleanup = false; // Did we see a cleanup?
- SmallSet<Constant *, 8> AlreadyCaught; // Typeinfos known caught already.
- for (; region && !AllCaught; region = region->outer)
- switch (region->type) {
- case ERT_ALLOWED_EXCEPTIONS: {
- // Filter.
-
- // Push a fake placeholder value for the length. The real length is
- // computed below, once we know which typeinfos we are going to use.
- unsigned LengthIndex = Args.size();
- Args.push_back(NULL); // Fake length value.
-
- // Add the type infos.
- AllCaught = true;
- for (tree type = region->u.allowed.type_list; type;
- type = TREE_CHAIN(type)) {
- Constant *TypeInfo = ConvertTypeInfo(TREE_VALUE(type));
- // No point in permitting a typeinfo to be thrown if we know it can
- // never reach the filter.
- if (AlreadyCaught.count(TypeInfo))
- continue;
- Args.push_back(TypeInfo);
- AllCaught = false;
- }
-
- // The length is one more than the number of typeinfos.
- Args[LengthIndex] = ConstantInt::get(Type::getInt32Ty(Context),
- Args.size() - LengthIndex);
- break;
- }
- case ERT_CLEANUP:
- HasCleanup = true;
- break;
- case ERT_MUST_NOT_THROW:
- // Same as a zero-length filter.
- AllCaught = true;
- Args.push_back(ConstantInt::get(Type::getInt32Ty(Context), 1));
- break;
- case ERT_TRY:
- // Catches.
- for (eh_catch c = region->u.eh_try.first_catch; c ; c = c->next_catch)
- if (!c->type_list) {
- // Catch-all - push a null pointer.
- AllCaught = true;
- Args.push_back(Constant::getNullValue(Type::getInt8PtrTy(Context)));
- } else {
- // Add the type infos.
- for (tree type = c->type_list; type; type = TREE_CHAIN(type)) {
- Constant *TypeInfo = ConvertTypeInfo(TREE_VALUE(type));
- // No point in trying to catch a typeinfo that was already caught.
- if (!AlreadyCaught.insert(TypeInfo))
- continue;
- Args.push_back(TypeInfo);
- AllCaught = TypeInfo == CatchAll;
- if (AllCaught)
- break;
- }
- }
- break;
- }
-
- if (HasCleanup) {
- if (Args.size() == 2)
- // Insert a sentinel indicating that this is a cleanup-only selector.
- Args.push_back(ConstantInt::get(Type::getInt32Ty(Context), 0));
- else if (!AllCaught)
- // Some exceptions from this region may not be caught by any handler.
- // Since invokes are required to branch to the unwind label no matter
- // what exception is being unwound, append a catch-all. I have a plan
- // that will make all such horrible hacks unnecessary, but unfortunately
- // this comment is too short to explain it.
- Args.push_back(CatchAll);
- }
-
- // Emit the selector call.
- Value *Filter = Builder.CreateCall(SlctrIntr, Args.begin(), Args.end(),
- "filter");
-
- // Store it if made use of elsewhere.
- if (RegionNo < ExceptionFilters.size() && ExceptionFilters[RegionNo])
- Builder.CreateStore(Filter, ExceptionFilters[RegionNo]);
-
- Args.clear();
- }
-
- NormalInvokes.clear();
-}
-
-/// EmitFailureBlocks - Emit the blocks containing failure code executed when
-/// an exception is thrown in a must-not-throw region.
-void TreeToLLVM::EmitFailureBlocks() {
- for (unsigned RegionNo = 1; RegionNo < FailureBlocks.size(); ++RegionNo) {
- BasicBlock *FailureBlock = FailureBlocks[RegionNo];
-
- if (!FailureBlock)
- continue;
-
- eh_region region = get_eh_region_from_number(RegionNo);
- assert(region->type == ERT_MUST_NOT_THROW && "Unexpected region type!");
-
- // Check whether all predecessors are invokes or not. Nothing exotic can
- // occur here, only direct branches and unwinding via an invoke.
- bool hasBranchPred = false;
- bool hasInvokePred = false;
- for (pred_iterator I = pred_begin(FailureBlock), E = pred_end(FailureBlock);
- I != E && (!hasInvokePred || !hasBranchPred); ++I) {
- TerminatorInst *T = (*I)->getTerminator();
- if (isa<InvokeInst>(T)) {
- assert(FailureBlock != T->getSuccessor(0) && "Expected unwind target!");
- hasInvokePred = true;
- } else {
- assert(isa<BranchInst>(T) && "Wrong kind of failure predecessor!");
- hasBranchPred = true;
- }
- }
- assert((hasBranchPred || hasInvokePred) && "No predecessors!");
-
- // Determine the landing pad that invokes will unwind to. If there are no
- // invokes, then there is no landing pad.
- BasicBlock *LandingPad = NULL;
- if (hasInvokePred) {
- // If all predecessors are invokes, then the failure block can be used as
- // the landing pad. Otherwise, create a landing pad.
- if (hasBranchPred)
- LandingPad = BasicBlock::Create(Context, "pad");
- else
- LandingPad = FailureBlock;
- }
-
- if (LandingPad) {
- BeginBlock(LandingPad);
-
- // Generate an empty (i.e. catch-all) filter in the landing pad.
- Function *ExcIntr = Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_exception);
- Function *SlctrIntr = Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_selector);
- Value *Args[3];
- // The exception pointer.
- Args[0] = Builder.CreateCall(ExcIntr, "exc_ptr");
- // The personality function.
- tree personality = DECL_FUNCTION_PERSONALITY(FnDecl);
- assert(personality && "No-throw region but no personality function!");
- Args[1] = Builder.CreateBitCast(DECL_LLVM(personality),
- Type::getInt8PtrTy(Context));
- // One more than the filter length.
- Args[2] = ConstantInt::get(Type::getInt32Ty(Context), 1);
- // Create the selector call.
- Builder.CreateCall(SlctrIntr, Args, Args + 3, "filter");
-
- if (LandingPad != FailureBlock) {
- // Make sure all invokes unwind to the new landing pad.
- for (pred_iterator I = pred_begin(FailureBlock),
- E = pred_end(FailureBlock); I != E; ) {
- TerminatorInst *T = (*I++)->getTerminator();
- if (isa<InvokeInst>(T))
- T->setSuccessor(1, LandingPad);
- }
-
- // Branch to the failure block at the end of the landing pad.
- Builder.CreateBr(FailureBlock);
- }
- }
-
- if (LandingPad != FailureBlock)
- BeginBlock(FailureBlock);
-
- // Determine the failure function to call.
- Value *FailFunc = DECL_LLVM(region->u.must_not_throw.failure_decl);
-
- // Make sure it has the right type.
- FunctionType *FTy = FunctionType::get(Type::getVoidTy(Context), false);
- FailFunc = Builder.CreateBitCast(FailFunc, FTy->getPointerTo());
-
- // Spank the user for being naughty.
- // TODO: Set the correct debug location.
- CallInst *FailCall = Builder.CreateCall(FailFunc);
-
- // This is always fatal.
- FailCall->setDoesNotReturn();
- FailCall->setDoesNotThrow();
- Builder.CreateUnreachable();
- }
-}
-
-/// EmitRewindBlock - Emit the block containing code to continue unwinding an
-/// exception.
-void TreeToLLVM::EmitRewindBlock() {
- if (!RewindBB)
- return;
-
- BeginBlock (RewindBB);
-
- // The exception pointer to continue unwinding.
- assert(RewindTmp && "Rewind block but nothing to unwind?");
- Value *ExcPtr = Builder.CreateLoad(RewindTmp);
-
- // Generate an explicit call to _Unwind_Resume_or_Rethrow.
- // FIXME: On ARM this should be a call to __cxa_end_cleanup with no arguments.
- std::vector<const Type*> Params(1, Type::getInt8PtrTy(Context));
- FunctionType *FTy = FunctionType::get(Type::getVoidTy(Context), Params,
- false);
- Constant *RewindFn =
- TheModule->getOrInsertFunction("_Unwind_Resume_or_Rethrow", FTy);
-
- // Pass it to _Unwind_Resume_or_Rethrow.
- CallInst *Rewind = Builder.CreateCall(RewindFn, ExcPtr);
-
- // This call does not return.
- Rewind->setDoesNotReturn();
- Builder.CreateUnreachable();
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Expressions ...
-//===----------------------------------------------------------------------===//
-
-static bool canEmitRegisterVariable(tree exp) {
- // Only variables can be marked as 'register'.
- if (TREE_CODE(exp) != VAR_DECL || !DECL_REGISTER(exp))
- return false;
-
- // We can emit inline assembler for access to global register variables.
- if (TREE_STATIC(exp) || DECL_EXTERNAL(exp) || TREE_PUBLIC(exp))
- return true;
-
- // Emit inline asm if this is a local variable with an assembler name on it.
- if (DECL_ASSEMBLER_NAME_SET_P(exp))
- return true;
-
- // Otherwise it's a normal automatic variable.
- return false;
-}
-
-/// EmitLoadOfLValue - When an l-value expression is used in a context that
-/// requires an r-value, this method emits the lvalue computation, then loads
-/// the result.
-Value *TreeToLLVM::EmitLoadOfLValue(tree exp) {
- if (canEmitRegisterVariable(exp))
- // If this is a register variable, EmitLV can't handle it (there is no
- // l-value of a register variable). Emit an inline asm node that copies the
- // value out of the specified register.
- return EmitReadOfRegisterVariable(exp);
-
- LValue LV = EmitLV(exp);
- LV.Volatile = TREE_THIS_VOLATILE(exp);
- // TODO: Arrange for Volatile to already be set in the LValue.
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- unsigned Alignment = LV.getAlignment();
-
- if (!LV.isBitfield()) {
- // Scalar value: emit a load.
- return LoadRegisterFromMemory(LV, TREE_TYPE(exp), Builder);
- } else {
- // This is a bitfield reference.
- if (!LV.BitSize)
- return Constant::getNullValue(Ty);
-
- const Type *ValTy = cast<PointerType>(LV.Ptr->getType())->getElementType();
- unsigned ValSizeInBits = ValTy->getPrimitiveSizeInBits();
-
- // The number of loads needed to read the entire bitfield.
- unsigned Strides = 1 + (LV.BitStart + LV.BitSize - 1) / ValSizeInBits;
-
- assert(ValTy->isIntegerTy() && "Invalid bitfield lvalue!");
- assert(ValSizeInBits > LV.BitStart && "Bad bitfield lvalue!");
- assert(ValSizeInBits >= LV.BitSize && "Bad bitfield lvalue!");
- assert(2*ValSizeInBits > LV.BitSize+LV.BitStart && "Bad bitfield lvalue!");
-
- Value *Result = NULL;
-
- for (unsigned I = 0; I < Strides; I++) {
- unsigned Index = BYTES_BIG_ENDIAN ? I : Strides - I - 1; // MSB first
- unsigned ThisFirstBit = Index * ValSizeInBits;
- unsigned ThisLastBitPlusOne = ThisFirstBit + ValSizeInBits;
- if (ThisFirstBit < LV.BitStart)
- ThisFirstBit = LV.BitStart;
- if (ThisLastBitPlusOne > LV.BitStart+LV.BitSize)
- ThisLastBitPlusOne = LV.BitStart+LV.BitSize;
-
- Value *Ptr = Index ?
- Builder.CreateGEP(LV.Ptr,
- ConstantInt::get(Type::getInt32Ty(Context), Index)) :
- LV.Ptr;
- LoadInst *LI = Builder.CreateLoad(Ptr, LV.Volatile);
- LI->setAlignment(Alignment);
- Value *Val = LI;
-
- unsigned BitsInVal = ThisLastBitPlusOne - ThisFirstBit;
- unsigned FirstBitInVal = ThisFirstBit % ValSizeInBits;
-
- if (BYTES_BIG_ENDIAN)
- FirstBitInVal = ValSizeInBits-FirstBitInVal-BitsInVal;
-
- // Mask the bits out by shifting left first, then shifting right. The
- // LLVM optimizer will turn this into an AND if this is an unsigned
- // expression.
-
- if (FirstBitInVal+BitsInVal != ValSizeInBits) {
- Value *ShAmt = ConstantInt::get(ValTy, ValSizeInBits -
- (FirstBitInVal+BitsInVal));
- Val = Builder.CreateShl(Val, ShAmt);
- }
-
- // Shift right required?
- if (ValSizeInBits != BitsInVal) {
- bool AddSignBits = !TYPE_UNSIGNED(TREE_TYPE(exp)) && !Result;
- Value *ShAmt = ConstantInt::get(ValTy, ValSizeInBits-BitsInVal);
- Val = AddSignBits ?
- Builder.CreateAShr(Val, ShAmt) : Builder.CreateLShr(Val, ShAmt);
- }
-
- if (Result) {
- Value *ShAmt = ConstantInt::get(ValTy, BitsInVal);
- Result = Builder.CreateShl(Result, ShAmt);
- Result = Builder.CreateOr(Result, Val);
- } else {
- Result = Val;
- }
- }
-
- return Builder.CreateIntCast(Result, GetRegType(TREE_TYPE(exp)),
- /*isSigned*/!TYPE_UNSIGNED(TREE_TYPE(exp)));
- }
-}
-
-Value *TreeToLLVM::EmitADDR_EXPR(tree exp) {
- LValue LV = EmitLV(TREE_OPERAND(exp, 0));
- assert((!LV.isBitfield() || LV.BitStart == 0) &&
- "It is illegal to take the address of a bitfield!");
- // Perform a cast here if necessary. For example, GCC sometimes forms an
- // ADDR_EXPR where the operand is an array, and the ADDR_EXPR type is a
- // pointer to the first element.
- return Builder.CreateBitCast(LV.Ptr, ConvertType(TREE_TYPE(exp)));
-}
-
-Value *TreeToLLVM::EmitOBJ_TYPE_REF(tree exp) {
- return Builder.CreateBitCast(EmitRegister(OBJ_TYPE_REF_EXPR(exp)),
- ConvertType(TREE_TYPE(exp)));
-}
-
-/// EmitCONSTRUCTOR - emit the constructor into the location specified by
-/// DestLoc.
-Value *TreeToLLVM::EmitCONSTRUCTOR(tree exp, const MemRef *DestLoc) {
- tree type = TREE_TYPE(exp);
- const Type *Ty = ConvertType(type);
- if (const VectorType *VTy = dyn_cast<VectorType>(Ty)) {
- assert(DestLoc == 0 && "Dest location for vector value?");
- std::vector<Value *> BuildVecOps;
- BuildVecOps.reserve(VTy->getNumElements());
-
- // Insert all of the elements here.
- unsigned HOST_WIDE_INT idx;
- tree value;
- FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (exp), idx, value) {
- Value *Elt = EmitMemory(value);
-
- if (const VectorType *EltTy = dyn_cast<VectorType>(Elt->getType())) {
- // GCC allows vectors to be built up from vectors. Extract all of the
- // vector elements and add them to the list of build vector operands.
- for (unsigned i = 0, e = EltTy->getNumElements(); i != e; ++i) {
- Value *Index = ConstantInt::get(llvm::Type::getInt32Ty(Context), i);
- BuildVecOps.push_back(Builder.CreateExtractElement(Elt, Index));
- }
- } else {
- assert(Elt->getType() == VTy->getElementType() &&
- "Unexpected type for vector constructor!");
- BuildVecOps.push_back(Elt);
- }
- }
-
- // Insert zero for any unspecified values.
- while (BuildVecOps.size() < VTy->getNumElements())
- BuildVecOps.push_back(Constant::getNullValue(VTy->getElementType()));
- assert(BuildVecOps.size() == VTy->getNumElements() &&
- "Vector constructor specified too many values!");
-
- return BuildVector(BuildVecOps);
- }
-
- assert(AGGREGATE_TYPE_P(type) && "Constructor for scalar type??");
-
- // Start out with the value zero'd out.
- EmitAggregateZero(*DestLoc, type);
-
- VEC(constructor_elt, gc) *elt = CONSTRUCTOR_ELTS(exp);
- switch (TREE_CODE(TREE_TYPE(exp))) {
- case ARRAY_TYPE:
- case RECORD_TYPE:
- default:
- if (elt && VEC_length(constructor_elt, elt)) {
- // We don't handle elements yet.
-
- TODO(exp);
- }
- return 0;
- case QUAL_UNION_TYPE:
- case UNION_TYPE:
- // Store each element of the constructor into the corresponding field of
- // DEST.
- if (!elt || VEC_empty(constructor_elt, elt)) return 0; // no elements
- assert(VEC_length(constructor_elt, elt) == 1
- && "Union CONSTRUCTOR should have one element!");
- tree tree_purpose = VEC_index(constructor_elt, elt, 0)->index;
- tree tree_value = VEC_index(constructor_elt, elt, 0)->value;
- if (!tree_purpose)
- return 0; // Not actually initialized?
-
- if (AGGREGATE_TYPE_P(TREE_TYPE(tree_purpose))) {
- EmitAggregate(tree_value, *DestLoc);
- } else {
- // Scalar value. Evaluate to a register, then do the store.
- Value *V = EmitRegister(tree_value);
- StoreRegisterToMemory(V, *DestLoc, TREE_TYPE(tree_purpose), Builder);
- }
- break;
- }
- return 0;
-}
-
-/// llvm_load_scalar_argument - Load value located at LOC.
-static Value *llvm_load_scalar_argument(Value *L,
- const llvm::Type *LLVMTy,
- unsigned RealSize,
- LLVMBuilder &Builder) {
- if (!RealSize)
- return UndefValue::get(LLVMTy);
-
- // Not clear what this is supposed to do on big endian machines...
- assert(!BYTES_BIG_ENDIAN && "Unsupported case - please report");
- assert(LLVMTy->isIntegerTy() && "Expected an integer value!");
- const Type *LoadType = IntegerType::get(Context, RealSize * 8);
- L = Builder.CreateBitCast(L, LoadType->getPointerTo());
- Value *Val = Builder.CreateLoad(L);
- if (LoadType->getPrimitiveSizeInBits() >= LLVMTy->getPrimitiveSizeInBits())
- Val = Builder.CreateTrunc(Val, LLVMTy);
- else
- Val = Builder.CreateZExt(Val, LLVMTy);
- return Val;
-}
-
-#ifndef LLVM_LOAD_SCALAR_ARGUMENT
-#define LLVM_LOAD_SCALAR_ARGUMENT(LOC,TY,SIZE,BUILDER) \
- llvm_load_scalar_argument((LOC),(TY),(SIZE),(BUILDER))
-#endif
-
-namespace {
- /// FunctionCallArgumentConversion - This helper class is driven by the ABI
- /// definition for this target to figure out how to pass arguments into the
- /// stack/regs for a function call.
- struct FunctionCallArgumentConversion : public DefaultABIClient {
- SmallVector<Value*, 16> &CallOperands;
- SmallVector<Value*, 2> LocStack;
- const FunctionType *FTy;
- const MemRef *DestLoc;
- bool useReturnSlot;
- LLVMBuilder &Builder;
- Value *TheValue;
- MemRef RetBuf;
- CallingConv::ID &CallingConv;
- bool isShadowRet;
- bool isAggrRet;
- unsigned Offset;
-
- FunctionCallArgumentConversion(SmallVector<Value*, 16> &ops,
- const FunctionType *FnTy,
- const MemRef *destloc,
- bool ReturnSlotOpt,
- LLVMBuilder &b,
- CallingConv::ID &CC)
- : CallOperands(ops), FTy(FnTy), DestLoc(destloc),
- useReturnSlot(ReturnSlotOpt), Builder(b), CallingConv(CC),
- isShadowRet(false), isAggrRet(false), Offset(0) { }
-
- /// getCallingConv - This provides the desired CallingConv for the function.
- CallingConv::ID& getCallingConv(void) { return CallingConv; }
-
- // Push the address of an argument.
- void pushAddress(Value *Loc) {
- assert(Loc && "Invalid location!");
- LocStack.push_back(Loc);
- }
-
- // Push the value of an argument.
- void pushValue(Value *V) {
- assert(LocStack.empty() && "Value only allowed at top level!");
- LocStack.push_back(NULL);
- TheValue = V;
- }
-
- // Get the address of the current location.
- Value *getAddress(void) {
- assert(!LocStack.empty());
- Value *&Loc = LocStack.back();
- if (!Loc) {
- // A value. Store to a temporary, and return the temporary's address.
- // Any future access to this argument will reuse the same address.
- Loc = TheTreeToLLVM->CreateTemporary(TheValue->getType());
- Builder.CreateStore(TheValue, Loc);
- }
- return Loc;
- }
-
- // Get the value of the current location (of type Ty).
- Value *getValue(const Type *Ty) {
- assert(!LocStack.empty());
- Value *Loc = LocStack.back();
- if (Loc) {
- // An address. Convert to the right type and load the value out.
- Loc = Builder.CreateBitCast(Loc, Ty->getPointerTo());
- return Builder.CreateLoad(Loc, "val");
- } else {
- // A value - just return it.
- assert(TheValue->getType() == Ty && "Value not of expected type!");
- return TheValue;
- }
- }
-
- void clear() {
- assert(LocStack.size() == 1 && "Imbalance!");
- LocStack.clear();
- }
-
- bool isShadowReturn() const { return isShadowRet; }
- bool isAggrReturn() { return isAggrRet; }
-
- // EmitShadowResult - If the return result was redirected to a buffer,
- // emit it now.
- Value *EmitShadowResult(tree type, const MemRef *DestLoc) {
- if (!RetBuf.Ptr)
- return 0;
-
- if (DestLoc) {
- // Copy out the aggregate return value now.
- assert(ConvertType(type) ==
- cast<PointerType>(RetBuf.Ptr->getType())->getElementType() &&
- "Inconsistent result types!");
- TheTreeToLLVM->EmitAggregateCopy(*DestLoc, RetBuf, type);
- return 0;
- } else {
- // Read out the scalar return value now.
- return Builder.CreateLoad(RetBuf.Ptr, "result");
- }
- }
-
- /// HandleScalarResult - This callback is invoked if the function returns a
- /// simple scalar result value.
- void HandleScalarResult(const Type * /*RetTy*/) {
- // There is nothing to do here if we return a scalar or void.
- assert(DestLoc == 0 &&
- "Call returns a scalar but caller expects aggregate!");
- }
-
- /// HandleAggregateResultAsScalar - This callback is invoked if the function
- /// returns an aggregate value by bit converting it to the specified scalar
- /// type and returning that.
- void HandleAggregateResultAsScalar(const Type * /*ScalarTy*/,
- unsigned Offset = 0) {
- this->Offset = Offset;
- }
-
- /// HandleAggregateResultAsAggregate - This callback is invoked if the
- /// function returns an aggregate value using multiple return values.
- void HandleAggregateResultAsAggregate(const Type * /*AggrTy*/) {
- // There is nothing to do here.
- isAggrRet = true;
- }
-
- /// HandleAggregateShadowResult - This callback is invoked if the function
- /// returns an aggregate value by using a "shadow" first parameter. If
- /// RetPtr is set to true, the pointer argument itself is returned from the
- /// function.
- void HandleAggregateShadowResult(const PointerType *PtrArgTy, bool /*RetPtr*/) {
- // We need to pass memory to write the return value into.
- // FIXME: alignment and volatility are being ignored!
- assert(!DestLoc || PtrArgTy == DestLoc->Ptr->getType());
-
- if (DestLoc == 0) {
- // The result is unused, but still needs to be stored somewhere.
- Value *Buf = TheTreeToLLVM->CreateTemporary(PtrArgTy->getElementType());
- CallOperands.push_back(Buf);
- } else if (useReturnSlot) {
- // Letting the call write directly to the final destination is safe and
- // may be required. Do not use a buffer.
- CallOperands.push_back(DestLoc->Ptr);
- } else {
- // Letting the call write directly to the final destination may not be
- // safe (eg: if DestLoc aliases a parameter) and is not required - pass
- // a buffer and copy it to DestLoc after the call.
- RetBuf = TheTreeToLLVM->CreateTempLoc(PtrArgTy->getElementType());
- CallOperands.push_back(RetBuf.Ptr);
- }
-
- // Note the use of a shadow argument.
- isShadowRet = true;
- }
-
- void HandlePad(const llvm::Type *LLVMTy) {
- CallOperands.push_back(UndefValue::get(LLVMTy));
- }
-
- /// HandleScalarShadowResult - This callback is invoked if the function
- /// returns a scalar value by using a "shadow" first parameter, which is a
- /// pointer to the scalar, of type PtrArgTy. If RetPtr is set to true,
- /// the pointer argument itself is returned from the function.
- void HandleScalarShadowResult(const PointerType *PtrArgTy,
- bool /*RetPtr*/) {
- assert(DestLoc == 0 &&
- "Call returns a scalar but caller expects aggregate!");
- // Create a buffer to hold the result. The result will be loaded out of
- // it after the call.
- RetBuf = TheTreeToLLVM->CreateTempLoc(PtrArgTy->getElementType());
- CallOperands.push_back(RetBuf.Ptr);
-
- // Note the use of a shadow argument.
- isShadowRet = true;
- }
-
- /// HandleScalarArgument - This is the primary callback that specifies an
- /// LLVM argument to pass. It is only used for first class types.
- void HandleScalarArgument(const llvm::Type *LLVMTy, tree type,
- unsigned RealSize = 0) {
- Value *Loc = NULL;
- if (RealSize) {
- Value *L = getAddress();
- Loc = LLVM_LOAD_SCALAR_ARGUMENT(L,LLVMTy,RealSize,Builder);
- } else
- Loc = getValue(LLVMTy);
-
- // Perform any implicit type conversions.
- if (CallOperands.size() < FTy->getNumParams()) {
- const Type *CalledTy= FTy->getParamType(CallOperands.size());
- if (Loc->getType() != CalledTy) {
- assert(type && "Inconsistent parameter types?");
- bool isSigned = !TYPE_UNSIGNED(type);
- Loc = TheTreeToLLVM->CastToAnyType(Loc, isSigned, CalledTy, false);
- }
- }
-
- CallOperands.push_back(Loc);
- }
-
- /// HandleByInvisibleReferenceArgument - This callback is invoked if a
- /// pointer (of type PtrTy) to the argument is passed rather than the
- /// argument itself.
- void HandleByInvisibleReferenceArgument(const llvm::Type *PtrTy,
- tree /*type*/) {
- Value *Loc = getAddress();
- Loc = Builder.CreateBitCast(Loc, PtrTy);
- CallOperands.push_back(Loc);
- }
-
- /// HandleByValArgument - This callback is invoked if the aggregate function
- /// argument is passed by value. It is lowered to a parameter passed by
- /// reference with an additional parameter attribute "ByVal".
- void HandleByValArgument(const llvm::Type *LLVMTy, tree /*type*/) {
- Value *Loc = getAddress();
- assert(LLVMTy->getPointerTo() == Loc->getType());
- (void)LLVMTy; // Otherwise unused if asserts off - avoid compiler warning.
- CallOperands.push_back(Loc);
- }
-
- /// HandleFCAArgument - This callback is invoked if the aggregate function
- /// argument is passed as a first class aggregate.
- void HandleFCAArgument(const llvm::Type *LLVMTy, tree /*type*/) {
- Value *Loc = getAddress();
- assert(LLVMTy->getPointerTo() == Loc->getType());
- (void)LLVMTy; // Otherwise unused if asserts off - avoid compiler warning.
- CallOperands.push_back(Builder.CreateLoad(Loc));
- }
-
- /// EnterField - Called when we're about to enter the field of a struct
- /// or union. FieldNo is the number of the element we are entering in the
- /// LLVM Struct, StructTy is the LLVM type of the struct we are entering.
- void EnterField(unsigned FieldNo, const llvm::Type *StructTy) {
- Value *Loc = getAddress();
- Loc = Builder.CreateBitCast(Loc, StructTy->getPointerTo());
- pushAddress(Builder.CreateStructGEP(Loc, FieldNo, "elt"));
- }
- void ExitField() {
- assert(!LocStack.empty());
- LocStack.pop_back();
- }
- };
-}
-
-/// EmitCallOf - Emit a call to the specified callee with the operands specified
-/// in the GIMPLE_CALL 'stmt'. If the result of the call is a scalar, return the
-/// result, otherwise store it in DestLoc.
-Value *TreeToLLVM::EmitCallOf(Value *Callee, gimple stmt, const MemRef *DestLoc,
- const AttrListPtr &InPAL) {
- BasicBlock *LandingPad = 0; // Non-zero indicates an invoke.
- int LPadNo = 0;
-
- AttrListPtr PAL = InPAL;
- if (PAL.isEmpty() && isa<Function>(Callee))
- PAL = cast<Function>(Callee)->getAttributes();
-
- // Work out whether to use an invoke or an ordinary call.
- if (!stmt_could_throw_p(stmt))
- // This call does not throw - mark it 'nounwind'.
- PAL = PAL.addAttr(~0, Attribute::NoUnwind);
-
- if (!PAL.paramHasAttr(~0, Attribute::NoUnwind)) {
- // This call may throw. Determine if we need to generate
- // an invoke rather than a simple call.
- LPadNo = lookup_stmt_eh_lp(stmt);
-
- if (LPadNo > 0) {
- // The call is in an exception handling region with a landing pad.
- // Generate an invoke, with the GCC landing pad as the unwind destination.
- // The destination may change to an LLVM-only landing pad, which precedes
- // the GCC one, after phi nodes have been populated (doing things this way
- // simplifies the generation of phi nodes).
- eh_landing_pad lp = get_eh_landing_pad_from_number(LPadNo);
- assert(lp && "Post landing pad not found!");
- LandingPad = getLabelDeclBlock(lp->post_landing_pad);
- } else if (LPadNo < 0) {
- eh_region region = get_eh_region_from_lp_number(LPadNo);
- // The call is in a must-not-throw region. Generate an invoke that causes
- // the region's failure code to be run if an exception is thrown.
- assert(region->type == ERT_MUST_NOT_THROW && "Unexpected region type!");
-
- // Unwind to the block containing the failure code.
- LandingPad = getFailureBlock(region->index);
- }
- }
-
- tree fndecl = gimple_call_fndecl(stmt);
- tree fntype = fndecl ?
- TREE_TYPE(fndecl) : TREE_TYPE (TREE_TYPE(gimple_call_fn(stmt)));
-
- // Determine the calling convention.
- CallingConv::ID CallingConvention = CallingConv::C;
-#ifdef TARGET_ADJUST_LLVM_CC
- TARGET_ADJUST_LLVM_CC(CallingConvention, fntype);
-#endif
-
- SmallVector<Value*, 16> CallOperands;
- const PointerType *PFTy = cast<PointerType>(Callee->getType());
- const FunctionType *FTy = cast<FunctionType>(PFTy->getElementType());
- FunctionCallArgumentConversion Client(CallOperands, FTy, DestLoc,
- gimple_call_return_slot_opt_p(stmt),
- Builder, CallingConvention);
- DefaultABI ABIConverter(Client);
-
- // Handle the result, including struct returns.
- ABIConverter.HandleReturnType(gimple_call_return_type(stmt),
- fndecl ? fndecl : fntype,
- fndecl ? DECL_BUILT_IN(fndecl) : false);
-
- // Pass the static chain, if any, as the first parameter.
- if (gimple_call_chain(stmt))
- CallOperands.push_back(EmitMemory(gimple_call_chain(stmt)));
-
- // Loop over the arguments, expanding them and adding them to the op list.
- std::vector<const Type*> ScalarArgs;
- for (unsigned i = 0, e = gimple_call_num_args(stmt); i != e; ++i) {
- tree arg = gimple_call_arg(stmt, i);
- tree type = TREE_TYPE(arg);
- const Type *ArgTy = ConvertType(type);
-
- // Push the argument.
- if (ArgTy->isSingleValueType()) {
- // A scalar - push the value.
- Client.pushValue(EmitMemory(arg));
- } else if (LLVM_SHOULD_PASS_AGGREGATE_AS_FCA(type, ArgTy)) {
- if (AGGREGATE_TYPE_P(type)) {
- // Pass the aggregate as a first class value.
- LValue ArgVal = EmitLV(arg);
- Client.pushValue(Builder.CreateLoad(ArgVal.Ptr));
- } else {
- // Already first class (eg: a complex number) - push the value.
- Client.pushValue(EmitMemory(arg));
- }
- } else {
- if (AGGREGATE_TYPE_P(type)) {
- // An aggregate - push the address.
- LValue ArgVal = EmitLV(arg);
- assert(!ArgVal.isBitfield() && "Bitfields are first-class types!");
- Client.pushAddress(ArgVal.Ptr);
- } else {
- // A first class value (eg: a complex number). Push the address of a
- // temporary copy.
- MemRef Copy = CreateTempLoc(ArgTy);
- StoreRegisterToMemory(EmitRegister(arg), Copy, type, Builder);
- Client.pushAddress(Copy.Ptr);
- }
- }
-
- Attributes Attrs = Attribute::None;
-
- unsigned OldSize = CallOperands.size();
-
- ABIConverter.HandleArgument(type, ScalarArgs, &Attrs);
-
- if (Attrs != Attribute::None) {
- // If the argument is split into multiple scalars, assign the
- // attributes to all scalars of the aggregate.
- for (unsigned i = OldSize + 1; i <= CallOperands.size(); ++i) {
- PAL = PAL.addAttr(i, Attrs);
- }
- }
-
- Client.clear();
- }
-
- // Compile stuff like:
- // %tmp = call float (...)* bitcast (float ()* @foo to float (...)*)( )
- // to:
- // %tmp = call float @foo( )
- // This commonly occurs due to C "implicit ..." semantics.
- if (ConstantExpr *CE = dyn_cast<ConstantExpr>(Callee)) {
- if (CallOperands.empty() && CE->getOpcode() == Instruction::BitCast) {
- Constant *RealCallee = CE->getOperand(0);
- assert(RealCallee->getType()->isPointerTy() &&
- "Bitcast to ptr not from ptr?");
- const PointerType *RealPT = cast<PointerType>(RealCallee->getType());
- if (const FunctionType *RealFT =
- dyn_cast<FunctionType>(RealPT->getElementType())) {
- const PointerType *ActualPT = cast<PointerType>(Callee->getType());
- const FunctionType *ActualFT =
- cast<FunctionType>(ActualPT->getElementType());
- if (RealFT->getReturnType() == ActualFT->getReturnType() &&
- RealFT->getNumParams() == 0)
- Callee = RealCallee;
- }
- }
- }
-
- Value *Call;
- if (!LandingPad) {
- Call = Builder.CreateCall(Callee, CallOperands.begin(), CallOperands.end());
- cast<CallInst>(Call)->setCallingConv(CallingConvention);
- cast<CallInst>(Call)->setAttributes(PAL);
- } else {
- BasicBlock *NextBlock = BasicBlock::Create(Context);
- Call = Builder.CreateInvoke(Callee, NextBlock, LandingPad,
- CallOperands.begin(), CallOperands.end());
- cast<InvokeInst>(Call)->setCallingConv(CallingConvention);
- cast<InvokeInst>(Call)->setAttributes(PAL);
-
- if (LPadNo > 0) {
- // The invoke's destination may change to an LLVM-only landing pad, which
- // precedes the GCC one, after phi nodes have been populated (doing things
- // this way simplifies the generation of phi nodes). Record the invoke as
- // well as the GCC exception handling region.
- if ((unsigned)LPadNo >= NormalInvokes.size())
- NormalInvokes.resize(LPadNo + 1);
- NormalInvokes[LPadNo].push_back(cast<InvokeInst>(Call));
- }
-
- BeginBlock(NextBlock);
- }
-
- if (Client.isShadowReturn())
- return Client.EmitShadowResult(gimple_call_return_type(stmt), DestLoc);
-
- if (Call->getType()->isVoidTy())
- return 0;
-
- if (Client.isAggrReturn()) {
- MemRef Target;
- if (DestLoc)
- Target = *DestLoc;
- else
- // Destination is a first class value (eg: a complex number). Extract to
- // a temporary then load the value out later.
- Target = CreateTempLoc(ConvertType(gimple_call_return_type(stmt)));
-
- if (TD.getTypeAllocSize(Call->getType()) <=
- TD.getTypeAllocSize(cast<PointerType>(Target.Ptr->getType())
- ->getElementType())) {
- Value *Dest = Builder.CreateBitCast(Target.Ptr,
- Call->getType()->getPointerTo());
- LLVM_EXTRACT_MULTIPLE_RETURN_VALUE(Call, Dest, Target.Volatile,
- Builder);
- } else {
- // The call will return an aggregate value in registers, but
- // those registers are bigger than Target. Allocate a
- // temporary to match the registers, store the registers there,
- // cast the temporary to the correct (smaller) type, and copy the
- // value into Target using that type. Assume the
- // optimizer will delete the temporary and clean this up.
- AllocaInst *biggerTmp = CreateTemporary(Call->getType());
- LLVM_EXTRACT_MULTIPLE_RETURN_VALUE(Call,biggerTmp,/*Volatile=*/false,
- Builder);
- EmitAggregateCopy(Target,
- MemRef(Builder.CreateBitCast(biggerTmp,Call->getType()->
- getPointerTo()),
- Target.getAlignment(), Target.Volatile),
- gimple_call_return_type(stmt));
- }
-
- return DestLoc ? 0 : Builder.CreateLoad(Target.Ptr);
- }
-
- if (!DestLoc) {
- const Type *RetTy = ConvertType(gimple_call_return_type(stmt));
- if (Call->getType() == RetTy)
- return Call; // Normal scalar return.
-
- // May be something as simple as a float being returned as an integer, or
- // something trickier like a complex int type { i32, i32 } being returned
- // as an i64.
- if (Call->getType()->canLosslesslyBitCastTo(RetTy))
- return Builder.CreateBitCast(Call, RetTy); // Simple case.
- // Probably a scalar to complex conversion.
- assert(TD.getTypeAllocSize(Call->getType()) == TD.getTypeAllocSize(RetTy) &&
- "Size mismatch in scalar to scalar conversion!");
- Value *Tmp = CreateTemporary(Call->getType());
- Builder.CreateStore(Call, Tmp);
- return Builder.CreateLoad(Builder.CreateBitCast(Tmp,RetTy->getPointerTo()));
- }
-
- // If the caller expects an aggregate, we have a situation where the ABI for
- // the current target specifies that the aggregate be returned in scalar
- // registers even though it is an aggregate. We must bitconvert the scalar
- // to the destination aggregate type. We do this by casting the DestLoc
- // pointer and storing into it. The store does not necessarily start at the
- // beginning of the aggregate (x86-64).
- Value *Ptr = DestLoc->Ptr;
- // AggTy - The type of the aggregate being stored to.
- const Type *AggTy = cast<PointerType>(Ptr->getType())->getElementType();
- // MaxStoreSize - The maximum number of bytes we can store without overflowing
- // the aggregate.
- int64_t MaxStoreSize = TD.getTypeAllocSize(AggTy);
- if (Client.Offset) {
- Ptr = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
- Ptr = Builder.CreateGEP(Ptr,
- ConstantInt::get(TD.getIntPtrType(Context), Client.Offset));
- MaxStoreSize -= Client.Offset;
- }
- assert(MaxStoreSize > 0 && "Storing off end of aggregate?");
- Value *Val = Call;
- // Check whether storing the scalar directly would overflow the aggregate.
- if (TD.getTypeStoreSize(Call->getType()) > (uint64_t)MaxStoreSize) {
- // Chop down the size of the scalar to the maximum number of bytes that can
- // be stored without overflowing the destination.
- // TODO: Check whether this works correctly on big-endian machines.
- // Store the scalar to a temporary.
- Value *Tmp = CreateTemporary(Call->getType());
- Builder.CreateStore(Call, Tmp);
- // Load the desired number of bytes back out again as an integer of the
- // appropriate size.
- const Type *SmallTy = IntegerType::get(Context, MaxStoreSize*8);
- Tmp = Builder.CreateBitCast(Tmp, PointerType::getUnqual(SmallTy));
- Val = Builder.CreateLoad(Tmp);
- // Store the integer rather than the call result to the aggregate.
- }
- Ptr = Builder.CreateBitCast(Ptr, PointerType::getUnqual(Val->getType()));
- StoreInst *St = Builder.CreateStore(Val, Ptr, DestLoc->Volatile);
- St->setAlignment(DestLoc->getAlignment());
- return 0;
-}
-
-/// EmitSimpleCall - Emit a call to the function with the given name and return
-/// type, passing the provided arguments (which should all be gimple registers
-/// or local constants of register type). No marshalling is done: the arguments
-/// are directly passed through.
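- /// A hedged usage sketch (the callee name below is made up, not a function the
- /// plugin actually emits): the variadic argument list is null terminated, so a
- /// typical call looks like
- ///   EmitSimpleCall("__some_runtime_helper", void_type_node, Arg1, Arg2, NULL);
- /// which emits a direct call to @__some_runtime_helper with Arg1 and Arg2
- /// passed through unchanged.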
-CallInst *TreeToLLVM::EmitSimpleCall(StringRef CalleeName, tree ret_type,
- /* arguments */ ...) {
- va_list ops;
- va_start(ops, ret_type);
-
- // Build the list of arguments.
- std::vector<Value*> Args;
-#ifdef TARGET_ADJUST_LLVM_CC
- // Build the list of GCC argument types.
- tree arg_types;
- tree *chainp = &arg_types;
-#endif
- while (tree arg = va_arg(ops, tree)) {
- Args.push_back(EmitRegister(arg));
-#ifdef TARGET_ADJUST_LLVM_CC
- *chainp = build_tree_list(NULL, TREE_TYPE(arg));
- chainp = &TREE_CHAIN(*chainp);
-#endif
- }
-#ifdef TARGET_ADJUST_LLVM_CC
- // Indicate that this function is not varargs.
- *chainp = void_list_node;
-#endif
- va_end(ops);
-
- const Type *RetTy = TREE_CODE(ret_type) == VOID_TYPE ?
- Type::getVoidTy(Context) : GetRegType(ret_type);
-
- // The LLVM argument types.
- std::vector<const Type*> ArgTys;
- ArgTys.reserve(Args.size());
- for (unsigned i = 0, e = Args.size(); i != e; ++i)
- ArgTys.push_back(Args[i]->getType());
-
- // Determine the calling convention.
- CallingConv::ID CC = CallingConv::C;
-#ifdef TARGET_ADJUST_LLVM_CC
- // Query the target for the calling convention to use.
- tree fntype = build_function_type(ret_type, arg_types);
- TARGET_ADJUST_LLVM_CC(CC, fntype);
-#endif
-
- // Get the function declaration for the callee.
- const FunctionType *FTy = FunctionType::get(RetTy, ArgTys, /*isVarArg*/false);
- Constant *Func = TheModule->getOrInsertFunction(CalleeName, FTy);
-
- // If the function already existed with the wrong prototype then don't try to
- // muck with its calling convention. Otherwise, set the calling convention.
- if (Function *F = dyn_cast<Function>(Func))
- F->setCallingConv(CC);
-
- // Finally, call the function.
- CallInst *CI = Builder.CreateCall(Func, Args.begin(), Args.end());
- CI->setCallingConv(CC);
- return CI;
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Inline Assembly and Register Variables ...
-//===----------------------------------------------------------------------===//
-
-// LLVM_GET_REG_NAME - Default to use GCC's register names. Targets may
-// override this to use different names for some registers. The REG_NAME is
-// the name before it was decoded; it may be null in some contexts.
-#ifndef LLVM_GET_REG_NAME
-#define LLVM_GET_REG_NAME(REG_NAME, REG_NUM) reg_names[REG_NUM]
-#endif
-
-// LLVM_CANONICAL_ADDRESS_CONSTRAINTS - GCC defines the "p" constraint to
-// allow a valid memory address, but targets differ widely on what is allowed
-// as an address. This macro is a string containing the canonical constraint
-// characters that are conservatively valid addresses. Default to allowing an
-// address in a register, since that works for many targets.
-#ifndef LLVM_CANONICAL_ADDRESS_CONSTRAINTS
-#define LLVM_CANONICAL_ADDRESS_CONSTRAINTS "r"
-#endif
-
-/// Reads from register variables are handled by emitting an inline asm node
-/// that copies the value out of the specified register.
-Value *TreeToLLVM::EmitReadOfRegisterVariable(tree decl) {
- const Type *MemTy = ConvertType(TREE_TYPE(decl));
- const Type *RegTy = GetRegType(TREE_TYPE(decl));
-
- // If there was an error, return something bogus.
- if (ValidateRegisterVariable(decl))
- return UndefValue::get(RegTy);
-
- // Turn this into a 'tmp = call Ty asm "", "={reg}"()'.
- FunctionType *FTy = FunctionType::get(MemTy, std::vector<const Type*>(),
- false);
-
- const char *Name = extractRegisterName(decl);
- Name = LLVM_GET_REG_NAME(Name, decode_reg_name(Name));
-
- InlineAsm *IA = InlineAsm::get(FTy, "", "={"+std::string(Name)+"}", true);
- CallInst *Call = Builder.CreateCall(IA);
- Call->setDoesNotThrow();
-
- // Convert the call result to in-register type.
- return Mem2Reg(Call, TREE_TYPE(decl), Builder);
-}
-
-/// Stores to register variables are handled by emitting an inline asm node
-/// that copies the value into the specified register.
-void TreeToLLVM::EmitModifyOfRegisterVariable(tree decl, Value *RHS) {
- // If there was an error, bail out.
- if (ValidateRegisterVariable(decl))
- return;
-
- // Convert to in-memory type.
- RHS = Reg2Mem(RHS, TREE_TYPE(decl), Builder);
-
- // Turn this into a 'call void asm sideeffect "", "{reg}"(Ty %RHS)'.
- std::vector<const Type*> ArgTys;
- ArgTys.push_back(RHS->getType());
- FunctionType *FTy = FunctionType::get(Type::getVoidTy(Context), ArgTys,
- false);
-
- const char *Name = extractRegisterName(decl);
- Name = LLVM_GET_REG_NAME(Name, decode_reg_name(Name));
-
- InlineAsm *IA = InlineAsm::get(FTy, "", "{"+std::string(Name)+"}", true);
- CallInst *Call = Builder.CreateCall(IA, RHS);
- Call->setDoesNotThrow();
-}
-
-/// ConvertInlineAsmStr - Convert the specified inline asm string to an LLVM
-/// InlineAsm string. The GNU style inline asm template string has the
-/// following format:
-/// %N (for N a digit) means print operand N in usual manner.
-/// %= means a unique number for the inline asm.
-/// %lN means require operand N to be a CODE_LABEL or LABEL_REF
-/// and print the label name with no punctuation.
-/// %cN means require operand N to be a constant
-/// and print the constant expression with no punctuation.
-/// %aN means expect operand N to be a memory address
-/// (not a memory reference!) and print a reference to that address.
-/// %nN means expect operand N to be a constant and print a constant
-/// expression for minus the value of the operand, with no other
-/// punctuation.
-/// Other %xN expressions are turned into LLVM ${N:x} operands.
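- /// As a rough illustration (the template text is invented for this comment):
- /// with two operands, the GCC template "movl %w1, %0" becomes the LLVM
- /// template "movl ${1:w}, $0", and "%=" becomes "${:uid}".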
-///
-static std::string ConvertInlineAsmStr(gimple stmt, unsigned NumOperands) {
- const char *AsmStr = gimple_asm_string(stmt);
-
- // gimple_asm_input_p - This flag is set if this is a non-extended ASM,
- // which means that the asm string should not be interpreted, other than
- // to escape $'s.
- if (gimple_asm_input_p(stmt)) {
- const char *InStr = AsmStr;
- std::string Result;
- while (1) {
- switch (*InStr++) {
- case 0: return Result; // End of string.
- default: Result += InStr[-1]; break; // Normal character.
- case '$': Result += "$$"; break; // Escape '$' characters.
- }
- }
- }
-
- std::string Result;
- while (1) {
- switch (*AsmStr++) {
- case 0: return Result; // End of string.
- default: Result += AsmStr[-1]; break; // Normal character.
- case '$': Result += "$$"; break; // Escape '$' characters.
-#ifdef ASSEMBLER_DIALECT
- // Note that we can't escape to ${, because that is the syntax for vars.
- case '{': Result += "$("; break; // Escape '{' character.
- case '}': Result += "$)"; break; // Escape '}' character.
- case '|': Result += "$|"; break; // Escape '|' character.
-#endif
- case '%': // GCC escape character.
- char EscapedChar = *AsmStr++;
- if (EscapedChar == '%') { // Escaped '%' character
- Result += '%';
- } else if (EscapedChar == '=') { // Unique ID for the asm instance.
- Result += "${:uid}";
- }
-#ifdef LLVM_ASM_EXTENSIONS
- LLVM_ASM_EXTENSIONS(EscapedChar, AsmStr, Result)
-#endif
- else if (ISALPHA(EscapedChar)) {
- // % followed by a letter and some digits. This outputs an operand in a
- // special way depending on the letter. We turn this into LLVM ${N:o}
- // syntax.
- char *EndPtr;
- unsigned long OpNum = strtoul(AsmStr, &EndPtr, 10);
-
- if (AsmStr == EndPtr) {
- error_at(gimple_location(stmt),
- "operand number missing after %%-letter");
- return Result;
- } else if (OpNum >= NumOperands) {
- error_at(gimple_location(stmt), "operand number out of range");
- return Result;
- }
- Result += "${" + utostr(OpNum) + ":" + EscapedChar + "}";
- AsmStr = EndPtr;
- } else if (ISDIGIT(EscapedChar)) {
- char *EndPtr;
- unsigned long OpNum = strtoul(AsmStr-1, &EndPtr, 10);
- AsmStr = EndPtr;
- Result += "$" + utostr(OpNum);
-#ifdef PRINT_OPERAND_PUNCT_VALID_P
- } else if (PRINT_OPERAND_PUNCT_VALID_P((unsigned char)EscapedChar)) {
- Result += "${:";
- Result += EscapedChar;
- Result += "}";
-#endif
- } else {
- output_operand_lossage("invalid %%-code");
- }
- break;
- }
- }
-}
-
-/// isOperandMentioned - Return true if the given operand is explicitly
-/// mentioned in the asm string. For example if passed operand 1 then
-/// this routine checks that the asm string does not contain "%1".
-static bool isOperandMentioned(gimple stmt, unsigned OpNum) {
- // If this is a non-extended ASM then the contents of the asm string are not
- // to be interpreted.
- if (gimple_asm_input_p(stmt))
- return false;
- // Search for a non-escaped '%' character followed by OpNum.
- for (const char *AsmStr = gimple_asm_string(stmt); *AsmStr; ++AsmStr) {
- if (*AsmStr != '%')
- // Not a '%', move on to next character.
- continue;
- char Next = AsmStr[1];
- // If this is "%%" then the '%' is escaped - skip both '%' characters.
- if (Next == '%') {
- ++AsmStr;
- continue;
- }
- // Whitespace is not allowed between the '%' and the number, so check that
- // the next character is a digit.
- if (!ISDIGIT(Next))
- continue;
- char *EndPtr;
- // If this is an explicit reference to OpNum then we are done.
- if (OpNum == strtoul(AsmStr+1, &EndPtr, 10))
- return true;
- // Otherwise, skip over the number and keep scanning.
- AsmStr = EndPtr - 1;
- }
- return false;
-}
-
-/// CanonicalizeConstraint - If we can canonicalize the constraint into
-/// something simpler, do so now. This turns register classes with a single
-/// register into the register itself, expands builtin constraints to multiple
-/// alternatives, etc.
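- /// For example (behaviour read off the code below): 'g' is expanded to "imr",
- /// 'p' is expanded to LLVM_CANONICAL_ADDRESS_CONSTRAINTS, and a constraint
- /// naming a register class with exactly one member becomes "{regname}".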
-static std::string CanonicalizeConstraint(const char *Constraint) {
- std::string Result;
-
- // Skip over modifier characters.
- bool DoneModifiers = false;
- while (!DoneModifiers) {
- switch (*Constraint) {
- default: DoneModifiers = true; break;
- case '=': assert(0 && "Should be after '='s");
- case '+': assert(0 && "'+' should already be expanded");
- case '*':
- case '?':
- case '!':
- ++Constraint;
- break;
- case '&': // Pass earlyclobber to LLVM.
- case '%': // Pass commutative to LLVM.
- Result += *Constraint++;
- break;
- case '#': // No constraint letters left.
- return Result;
- }
- }
-
- while (*Constraint) {
- char ConstraintChar = *Constraint++;
-
- // 'g' is just short-hand for 'imr'.
- if (ConstraintChar == 'g') {
- Result += "imr";
- continue;
- }
-
- // Translate 'p' to a target-specific set of constraints that
- // conservatively allow a valid memory address. For inline assembly there
- // is no way to know the mode of the data being addressed, so this is only
- // a rough approximation of how GCC handles this constraint.
- if (ConstraintChar == 'p') {
- Result += LLVM_CANONICAL_ADDRESS_CONSTRAINTS;
- continue;
- }
-
- // See if this is a regclass constraint.
- unsigned RegClass;
- if (ConstraintChar == 'r')
- // REG_CLASS_FROM_CONSTRAINT doesn't support 'r' for some reason.
- RegClass = GENERAL_REGS;
- else
- RegClass = REG_CLASS_FROM_CONSTRAINT(Constraint[-1], Constraint-1);
-
- if (RegClass == NO_REGS) { // not a reg class.
- Result += ConstraintChar;
- continue;
- }
-
- // Look to see if the specified regclass has exactly one member, and if so,
- // what it is. Cache this information in AnalyzedRegClasses once computed.
- static std::map<unsigned, int> AnalyzedRegClasses;
-
- std::map<unsigned, int>::iterator I =
- AnalyzedRegClasses.lower_bound(RegClass);
-
- int RegMember;
- if (I != AnalyzedRegClasses.end() && I->first == RegClass) {
- // We've already computed this, reuse value.
- RegMember = I->second;
- } else {
- // Otherwise, scan the regclass, looking for exactly one member.
- RegMember = -1; // -1 => not a single-register class.
- for (unsigned j = 0; j != FIRST_PSEUDO_REGISTER; ++j)
- if (TEST_HARD_REG_BIT(reg_class_contents[RegClass], j)) {
- if (RegMember == -1) {
- RegMember = j;
- } else {
- RegMember = -1;
- break;
- }
- }
- // Remember this answer for the next query of this regclass.
- AnalyzedRegClasses.insert(I, std::make_pair(RegClass, RegMember));
- }
-
- // If we found a single-register register class, return the register.
- if (RegMember != -1) {
- Result += '{';
- Result += LLVM_GET_REG_NAME(0, RegMember);
- Result += '}';
- } else {
- Result += ConstraintChar;
- }
- }
-
- return Result;
-}
-
-/// See if operand "exp" can use the indicated Constraint (which is
-/// terminated by a null or a comma).
-/// Returns: -1=no, 0=yes but auxiliary instructions needed, 1=yes and free
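- /// For example, an INTEGER_CST operand scores 1 against an "i" or "n"
- /// alternative, -1 against a pure memory alternative such as "m", and 0
- /// against "r" (usable, but auxiliary instructions would be needed).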
-static int MatchWeight(const char *Constraint, tree Operand) {
- const char *p = Constraint;
- int RetVal = 0;
- // Look for hard register operand. This matches only a constraint of a
- // register class that includes that hard register, and it matches that
- // perfectly, so we never return 0 in this case.
- if (TREE_CODE(Operand) == VAR_DECL && DECL_HARD_REGISTER(Operand)) {
- int RegNum = decode_reg_name(extractRegisterName(Operand));
- RetVal = -1;
- if (RegNum >= 0) {
- do {
- unsigned RegClass;
- if (*p == 'r')
- RegClass = GENERAL_REGS;
- else
- RegClass = REG_CLASS_FROM_CONSTRAINT(*p, p);
- if (RegClass != NO_REGS &&
- TEST_HARD_REG_BIT(reg_class_contents[RegClass], RegNum)) {
- RetVal = 1;
- break;
- }
- ++p;
- } while (*p != ',' && *p != 0);
- }
- }
- // Look for integer constant operand. This cannot match "m", and "i" is
- // better than "r". FIXME target-dependent immediate letters are not handled
- // yet; in general they require looking at the value.
- if (TREE_CODE(Operand) == INTEGER_CST) {
- do {
- RetVal = -1;
- if (*p == 'i' || *p == 'n') { // integer constant
- RetVal = 1;
- break;
- }
- if (*p != 'm' && *p != 'o' && *p != 'V') // not memory
- RetVal = 0;
- ++p;
- } while (*p != ',' && *p != 0);
- }
- /// TEMPORARY. This has the effect that alternative 0 is always chosen,
- /// except in the cases handled above.
- return RetVal;
-}
-
-/// ChooseConstraintTuple: we know each of the NumInputs+NumOutputs strings
-/// in Constraints[] is a comma-separated list of NumChoices different
-/// constraints. Look through the operands and constraint possibilities
-/// and pick a tuple where all the operands match. Replace the strings
-/// in Constraints[] with the shorter strings from that tuple (malloc'ed,
-/// caller is responsible for cleaning it up). Later processing can alter what
-/// Constraints points to, so to make sure we delete everything, the addresses
-/// of everything we allocated also are returned in StringStorage.
-/// Casting back and forth from char* to const char* is Ugly, but we have to
-/// interface with C code that expects const char*.
-///
-/// gcc's algorithm for picking "the best" tuple is quite complicated, and
-/// is performed after things like SROA, not before. At the moment we are
-/// just trying to pick one that will work. This may get refined.
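- /// A small illustration (constraint strings invented for this comment): with
- /// NumChoices==2, an output constraint "=r,m" and an input constraint "i,r",
- /// choosing alternative 0 rewrites Constraints[] to {"=r", "i"}, while
- /// choosing alternative 1 yields {"=m", "r"}.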
-static void ChooseConstraintTuple(gimple stmt, const char **Constraints,
- unsigned NumChoices,
- BumpPtrAllocator &StringStorage) {
- unsigned NumInputs = gimple_asm_ninputs(stmt);
- unsigned NumOutputs = gimple_asm_noutputs(stmt);
-
- int MaxWeight = -1;
- unsigned int CommasToSkip = 0;
- int *Weights = (int *)alloca(NumChoices * sizeof(int));
- // RunningConstraints is pointers into the Constraints strings which
- // are incremented as we go to point to the beginning of each
- // comma-separated alternative.
- const char** RunningConstraints =
- (const char**)alloca((NumInputs+NumOutputs)*sizeof(const char*));
- memcpy(RunningConstraints, Constraints,
- (NumInputs+NumOutputs) * sizeof(const char*));
- // The entire point of this loop is to compute CommasToSkip.
- for (unsigned i = 0; i != NumChoices; ++i) {
- Weights[i] = 0;
- for (unsigned j = 0; j != NumOutputs; ++j) {
- tree Output = gimple_asm_output_op(stmt, j);
- if (i==0)
- RunningConstraints[j]++; // skip leading =
- const char* p = RunningConstraints[j];
- while (*p=='*' || *p=='&' || *p=='%') // skip modifiers
- p++;
- if (Weights[i] != -1) {
- int w = MatchWeight(p, TREE_VALUE(Output));
- // Nonmatch means the entire tuple doesn't match. However, we
- // keep scanning to set up RunningConstraints correctly for the
- // next tuple.
- if (w < 0)
- Weights[i] = -1;
- else
- Weights[i] += w;
- }
- while (*p!=0 && *p!=',')
- p++;
- if (*p!=0) {
- p++; // skip comma
- while (*p=='*' || *p=='&' || *p=='%')
- p++; // skip modifiers
- }
- RunningConstraints[j] = p;
- }
- for (unsigned j = 0; j != NumInputs; ++j) {
- tree Input = gimple_asm_input_op(stmt, j);
- const char* p = RunningConstraints[NumOutputs + j];
- if (Weights[i] != -1) {
- int w = MatchWeight(p, TREE_VALUE(Input));
- if (w < 0)
- Weights[i] = -1; // As above.
- else
- Weights[i] += w;
- }
- while (*p!=0 && *p!=',')
- p++;
- if (*p!=0)
- p++;
- RunningConstraints[NumOutputs + j] = p;
- }
- if (Weights[i]>MaxWeight) {
- CommasToSkip = i;
- MaxWeight = Weights[i];
- }
- }
- // We have picked an alternative (the CommasToSkip'th one).
- // Change Constraints to point to malloc'd copies of the appropriate
- // constraints picked out of the original strings.
- for (unsigned int i=0; i<NumInputs+NumOutputs; i++) {
- assert(*(RunningConstraints[i])==0); // sanity check
- const char* start = Constraints[i];
- if (i<NumOutputs)
- start++; // skip '=' or '+'
- const char* end = start;
- while (*end != ',' && *end != 0)
- end++;
- for (unsigned int j=0; j<CommasToSkip; j++) {
- start = end+1;
- end = start;
- while (*end != ',' && *end != 0)
- end++;
- }
- // String we want is at start..end-1 inclusive.
- // For outputs, copy the leading = or +.
- char *newstring;
- if (i<NumOutputs) {
- newstring = StringStorage.Allocate<char>(end-start+1+1);
- newstring[0] = *(Constraints[i]);
- strncpy(newstring+1, start, end-start);
- newstring[end-start+1] = 0;
- } else {
- newstring = StringStorage.Allocate<char>(end-start+1);
- strncpy(newstring, start, end-start);
- newstring[end-start] = 0;
- }
- Constraints[i] = (const char *)newstring;
- }
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Helpers for Builtin Function Expansion ...
-//===----------------------------------------------------------------------===//
-
-Value *TreeToLLVM::BuildVector(const std::vector<Value*> &Ops) {
- assert((Ops.size() & (Ops.size()-1)) == 0 &&
- "Not a power-of-two sized vector!");
- bool AllConstants = true;
- for (unsigned i = 0, e = Ops.size(); i != e && AllConstants; ++i)
- AllConstants &= isa<Constant>(Ops[i]);
-
- // If this is a constant vector, create a ConstantVector.
- if (AllConstants) {
- std::vector<Constant*> CstOps;
- for (unsigned i = 0, e = Ops.size(); i != e; ++i)
- CstOps.push_back(cast<Constant>(Ops[i]));
- return ConstantVector::get(CstOps);
- }
-
- // Otherwise, insertelement the values to build the vector.
- Value *Result =
- UndefValue::get(VectorType::get(Ops[0]->getType(), Ops.size()));
-
- for (unsigned i = 0, e = Ops.size(); i != e; ++i)
- Result = Builder.CreateInsertElement(Result, Ops[i],
- ConstantInt::get(Type::getInt32Ty(Context), i));
-
- return Result;
-}
-
-/// BuildVector - This varargs function builds a literal vector ({} syntax) with
-/// the specified null-terminated list of elements. The elements must be all
-/// the same element type and there must be a power of two of them.
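- /// For instance (illustrative only): BuildVector(X, Y, Z, W, NULL) builds a
- /// <4 x Ty> vector whose elements are X, Y, Z and W, where Ty is their common
- /// element type.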
-Value *TreeToLLVM::BuildVector(Value *Elt, ...) {
- std::vector<Value*> Ops;
- va_list VA;
- va_start(VA, Elt);
-
- Ops.push_back(Elt);
- while (Value *Arg = va_arg(VA, Value *))
- Ops.push_back(Arg);
- va_end(VA);
-
- return BuildVector(Ops);
-}
-
-/// BuildVectorShuffle - Given two vectors and a variable length list of int
-/// constants, create a shuffle of the elements of the inputs, where each dest
-/// is specified by the indexes. The int constant list must be as long as the
-/// number of elements in the input vector.
-///
-/// Undef values may be specified by passing in -1 as the result value.
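- /// For instance (illustrative only): with two <4 x i32> inputs,
- /// BuildVectorShuffle(InVec1, InVec2, 0, 4, 1, 5) interleaves the first two
- /// elements of each input; indexes 0-3 select from InVec1, 4-7 from InVec2.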
-///
-Value *TreeToLLVM::BuildVectorShuffle(Value *InVec1, Value *InVec2, ...) {
- assert(InVec1->getType()->isVectorTy() &&
- InVec1->getType() == InVec2->getType() && "Invalid shuffle!");
- unsigned NumElements = cast<VectorType>(InVec1->getType())->getNumElements();
-
- // Get all the indexes from varargs.
- std::vector<Constant*> Idxs;
- va_list VA;
- va_start(VA, InVec2);
- for (unsigned i = 0; i != NumElements; ++i) {
- int idx = va_arg(VA, int);
- if (idx == -1)
- Idxs.push_back(UndefValue::get(Type::getInt32Ty(Context)));
- else {
- assert((unsigned)idx < 2*NumElements && "Element index out of range!");
- Idxs.push_back(ConstantInt::get(Type::getInt32Ty(Context), idx));
- }
- }
- va_end(VA);
-
- // Turn this into the appropriate shuffle operation.
- return Builder.CreateShuffleVector(InVec1, InVec2,
- ConstantVector::get(Idxs));
-}
-
-//===----------------------------------------------------------------------===//
-// ... Builtin Function Expansion ...
-//===----------------------------------------------------------------------===//
-
-/// EmitFrontendExpandedBuiltinCall - We allow the target to do some amount
-/// of lowering. This allows us to avoid having intrinsics for operations that
-/// directly correspond to LLVM constructs.
-///
-/// This method returns true if the builtin is handled, otherwise false.
-///
-bool TreeToLLVM::EmitFrontendExpandedBuiltinCall(gimple stmt, tree fndecl,
- const MemRef *DestLoc,
- Value *&Result) {
-#ifdef LLVM_TARGET_INTRINSIC_LOWER
- // Get the result type and operand list in an easy-to-consume format.
- const Type *ResultType = ConvertType(TREE_TYPE(TREE_TYPE(fndecl)));
- std::vector<Value*> Operands;
- for (unsigned i = 0, e = gimple_call_num_args(stmt); i != e; ++i) {
- tree OpVal = gimple_call_arg(stmt, i);
- if (AGGREGATE_TYPE_P(TREE_TYPE(OpVal))) {
- MemRef OpLoc = CreateTempLoc(ConvertType(TREE_TYPE(OpVal)));
- EmitAggregate(OpVal, OpLoc);
- Operands.push_back(Builder.CreateLoad(OpLoc.Ptr));
- } else {
- Operands.push_back(EmitMemory(OpVal));
- }
- }
-
- return LLVM_TARGET_INTRINSIC_LOWER(stmt, fndecl, DestLoc, Result, ResultType,
- Operands);
-#endif
- return false;
-}
-
-/// TargetBuiltinCache - A cache of builtin intrinsics indexed by the GCC
-/// builtin number.
-static std::vector<Constant*> TargetBuiltinCache;
-
-void TreeToLLVM::EmitMemoryBarrier(bool ll, bool ls, bool sl, bool ss,
- bool device) {
- Value* C[5];
- C[0] = ConstantInt::get(Type::getInt1Ty(Context), ll);
- C[1] = ConstantInt::get(Type::getInt1Ty(Context), ls);
- C[2] = ConstantInt::get(Type::getInt1Ty(Context), sl);
- C[3] = ConstantInt::get(Type::getInt1Ty(Context), ss);
- C[4] = ConstantInt::get(Type::getInt1Ty(Context), device);
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::memory_barrier),
- C, C + 5);
-}
-
-Value *
-TreeToLLVM::BuildBinaryAtomicBuiltin(gimple stmt, Intrinsic::ID id) {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Value *Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, id,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return Result;
-}
-
-Value *
-TreeToLLVM::BuildCmpAndSwapAtomicBuiltin(gimple stmt, tree type, bool isBool) {
- const Type *ResultTy = ConvertType(type);
- Value* C[3] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1)),
- EmitMemory(gimple_call_arg(stmt, 2))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0], /*isSigned*/!TYPE_UNSIGNED(type),
- "cast");
- C[2] = Builder.CreateIntCast(C[2], Ty[0], /*isSigned*/!TYPE_UNSIGNED(type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Value *Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_cmp_swap,
- Ty, 2),
- C, C + 3);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- if (isBool)
- Result = Builder.CreateIntCast(Builder.CreateICmpEQ(Result, C[1]),
- ConvertType(boolean_type_node),
- /*isSigned*/false);
- else
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return Result;
-}
-
-/// EmitBuiltinCall - stmt is a call to fndecl, a builtin function. Try to emit
-/// the call in a special way, setting Result to the scalar result if necessary.
-/// If we can't handle the builtin, return false, otherwise return true.
-bool TreeToLLVM::EmitBuiltinCall(gimple stmt, tree fndecl,
- const MemRef *DestLoc, Value *&Result) {
- if (DECL_BUILT_IN_CLASS(fndecl) == BUILT_IN_MD) {
- unsigned FnCode = DECL_FUNCTION_CODE(fndecl);
- if (TargetBuiltinCache.size() <= FnCode)
- TargetBuiltinCache.resize(FnCode+1);
-
- // If we haven't converted this intrinsic over yet, do so now.
- if (TargetBuiltinCache[FnCode] == 0) {
- const char *TargetPrefix = "";
-#ifdef LLVM_TARGET_INTRINSIC_PREFIX
- TargetPrefix = LLVM_TARGET_INTRINSIC_PREFIX;
-#endif
- // If the backend has some special code to lower, go ahead and try to
- // do that first.
- if (EmitFrontendExpandedBuiltinCall(stmt, fndecl, DestLoc, Result))
- return true;
-
- // If this builtin directly corresponds to an LLVM intrinsic, get the
- // IntrinsicID now.
- const char *BuiltinName = IDENTIFIER_POINTER(DECL_NAME(fndecl));
- Intrinsic::ID IntrinsicID =
- Intrinsic::getIntrinsicForGCCBuiltin(TargetPrefix, BuiltinName);
- if (IntrinsicID == Intrinsic::not_intrinsic) {
- error_at(gimple_location(stmt),
- "unsupported target builtin %<%s%> used", BuiltinName);
- const Type *ResTy = ConvertType(gimple_call_return_type(stmt));
- if (ResTy->isSingleValueType())
- Result = UndefValue::get(ResTy);
- return true;
- }
-
- // Finally, get the intrinsic's declaration and cache it for later calls.
- TargetBuiltinCache[FnCode] =
- Intrinsic::getDeclaration(TheModule, IntrinsicID);
- }
-
- Result = EmitCallOf(TargetBuiltinCache[FnCode], stmt, DestLoc,
- AttrListPtr());
- return true;
- }
-
- enum built_in_function fcode = DECL_FUNCTION_CODE(fndecl);
- switch (fcode) {
- default: return false;
- // Varargs builtins.
- case BUILT_IN_VA_START: return EmitBuiltinVAStart(stmt);
- case BUILT_IN_VA_END: return EmitBuiltinVAEnd(stmt);
- case BUILT_IN_VA_COPY: return EmitBuiltinVACopy(stmt);
- case BUILT_IN_CONSTANT_P: return EmitBuiltinConstantP(stmt, Result);
- case BUILT_IN_ALLOCA: return EmitBuiltinAlloca(stmt, Result);
- case BUILT_IN_EXTEND_POINTER: return EmitBuiltinExtendPointer(stmt, Result);
- case BUILT_IN_EXPECT: return EmitBuiltinExpect(stmt, Result);
- case BUILT_IN_MEMCPY: return EmitBuiltinMemCopy(stmt, Result,
- false, false);
- case BUILT_IN_MEMCPY_CHK: return EmitBuiltinMemCopy(stmt, Result,
- false, true);
- case BUILT_IN_MEMMOVE: return EmitBuiltinMemCopy(stmt, Result,
- true, false);
- case BUILT_IN_MEMMOVE_CHK: return EmitBuiltinMemCopy(stmt, Result,
- true, true);
- case BUILT_IN_MEMSET: return EmitBuiltinMemSet(stmt, Result, false);
- case BUILT_IN_MEMSET_CHK: return EmitBuiltinMemSet(stmt, Result, true);
- case BUILT_IN_BZERO: return EmitBuiltinBZero(stmt, Result);
- case BUILT_IN_PREFETCH: return EmitBuiltinPrefetch(stmt);
- case BUILT_IN_FRAME_ADDRESS: return EmitBuiltinReturnAddr(stmt, Result,true);
- case BUILT_IN_RETURN_ADDRESS:
- return EmitBuiltinReturnAddr(stmt, Result,false);
- case BUILT_IN_STACK_SAVE: return EmitBuiltinStackSave(stmt, Result);
- case BUILT_IN_STACK_RESTORE: return EmitBuiltinStackRestore(stmt);
- case BUILT_IN_EXTRACT_RETURN_ADDR:
- return EmitBuiltinExtractReturnAddr(stmt, Result);
- case BUILT_IN_FROB_RETURN_ADDR:
- return EmitBuiltinFrobReturnAddr(stmt, Result);
- case BUILT_IN_ADJUST_TRAMPOLINE:
- return EmitBuiltinAdjustTrampoline(stmt, Result);
- case BUILT_IN_INIT_TRAMPOLINE:
- return EmitBuiltinInitTrampoline(stmt, Result);
-
- // Exception handling builtins.
- case BUILT_IN_EH_POINTER:
- return EmitBuiltinEHPointer(stmt, Result);
-
- // Builtins used by the exception handling runtime.
- case BUILT_IN_DWARF_CFA:
- return EmitBuiltinDwarfCFA(stmt, Result);
-#ifdef DWARF2_UNWIND_INFO
- case BUILT_IN_DWARF_SP_COLUMN:
- return EmitBuiltinDwarfSPColumn(stmt, Result);
- case BUILT_IN_INIT_DWARF_REG_SIZES:
- return EmitBuiltinInitDwarfRegSizes(stmt, Result);
-#endif
- case BUILT_IN_EH_RETURN:
- return EmitBuiltinEHReturn(stmt, Result);
-#ifdef EH_RETURN_DATA_REGNO
- case BUILT_IN_EH_RETURN_DATA_REGNO:
- return EmitBuiltinEHReturnDataRegno(stmt, Result);
-#endif
- case BUILT_IN_UNWIND_INIT:
- return EmitBuiltinUnwindInit(stmt, Result);
-
- case BUILT_IN_OBJECT_SIZE: {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, INTEGER_TYPE, VOID_TYPE)) {
- error("Invalid builtin_object_size argument types");
- return false;
- }
- tree ObjSizeTree = gimple_call_arg(stmt, 1);
- STRIP_NOPS (ObjSizeTree);
- if (TREE_CODE (ObjSizeTree) != INTEGER_CST
- || tree_int_cst_sgn (ObjSizeTree) < 0
- || compare_tree_int (ObjSizeTree, 3) > 0) {
- error("Invalid second builtin_object_size argument");
- return false;
- }
-
- // LLVM doesn't handle type 1 or type 3. Deal with that here.
- Value *Tmp = EmitMemory(gimple_call_arg(stmt, 1));
-
- ConstantInt *CI = cast<ConstantInt>(Tmp);
-
- // Clear the bottom bit since we only handle whole objects and shift to turn
- // the second bit into our boolean.
- uint64_t val = (CI->getZExtValue() & 0x2) >> 1;
-
- Value *NewTy = ConstantInt::get(Tmp->getType(), val);
-
- Value* Args[] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- NewTy
- };
-
- // Grab the current return type.
- const Type* Ty = ConvertType(gimple_call_return_type(stmt));
-
- // Manually coerce the arg to the correct pointer type.
- Args[0] = Builder.CreateBitCast(Args[0], Type::getInt8PtrTy(Context));
- Args[1] = Builder.CreateIntCast(Args[1], Type::getInt1Ty(Context),
- /*isSigned*/false);
-
- Result = Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::objectsize,
- &Ty,
- 1),
- Args, Args + 2);
- return true;
- }
- // Unary bit counting intrinsics.
- // NOTE: do not merge these case statements. That will cause the memoized
- // Function* to be incorrectly shared across the different typed functions.
- case BUILT_IN_CLZ: // These GCC builtins always return int.
- case BUILT_IN_CLZL:
- case BUILT_IN_CLZLL: {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::ctlz);
- tree return_type = gimple_call_return_type(stmt);
- const Type *DestTy = ConvertType(return_type);
- Result = Builder.CreateIntCast(Result, DestTy,
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
- return true;
- }
- case BUILT_IN_CTZ: // These GCC builtins always return int.
- case BUILT_IN_CTZL:
- case BUILT_IN_CTZLL: {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::cttz);
- tree return_type = gimple_call_return_type(stmt);
- const Type *DestTy = ConvertType(return_type);
- Result = Builder.CreateIntCast(Result, DestTy,
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
- return true;
- }
- case BUILT_IN_PARITYLL:
- case BUILT_IN_PARITYL:
- case BUILT_IN_PARITY: {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::ctpop);
- Result = Builder.CreateBinOp(Instruction::And, Result,
- ConstantInt::get(Result->getType(), 1));
- tree return_type = gimple_call_return_type(stmt);
- const Type *DestTy = ConvertType(return_type);
- Result = Builder.CreateIntCast(Result, DestTy,
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
- return true;
- }
- case BUILT_IN_POPCOUNT: // These GCC builtins always return int.
- case BUILT_IN_POPCOUNTL:
- case BUILT_IN_POPCOUNTLL: {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::ctpop);
- tree return_type = gimple_call_return_type(stmt);
- const Type *DestTy = ConvertType(return_type);
- Result = Builder.CreateIntCast(Result, DestTy,
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
- return true;
- }
- case BUILT_IN_BSWAP32:
- case BUILT_IN_BSWAP64: {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::bswap);
- tree return_type = gimple_call_return_type(stmt);
- const Type *DestTy = ConvertType(return_type);
- Result = Builder.CreateIntCast(Result, DestTy,
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
- return true;
- }
-
- case BUILT_IN_SQRT:
- case BUILT_IN_SQRTF:
- case BUILT_IN_SQRTL:
- // The result of sqrt(negative) is implementation-defined, but follows
- // IEEE754 in most current implementations. llvm.sqrt, which has undefined
- // behavior for such inputs, is an inappropriate substitute.
- break;
- case BUILT_IN_POWI:
- case BUILT_IN_POWIF:
- case BUILT_IN_POWIL:
- Result = EmitBuiltinPOWI(stmt);
- return true;
- case BUILT_IN_POW:
- case BUILT_IN_POWF:
- case BUILT_IN_POWL:
- // If errno math has been disabled, expand these to llvm.pow calls.
- if (!flag_errno_math) {
- Result = EmitBuiltinPOW(stmt);
- return true;
- }
- break;
- case BUILT_IN_LOG:
- case BUILT_IN_LOGF:
- case BUILT_IN_LOGL:
- // If errno math has been disabled, expand these to llvm.log calls.
- if (!flag_errno_math) {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::log);
- Result = CastToFPType(Result, ConvertType(gimple_call_return_type(stmt)));
- return true;
- }
- break;
- case BUILT_IN_LOG2:
- case BUILT_IN_LOG2F:
- case BUILT_IN_LOG2L:
- // If errno math has been disabled, expand these to llvm.log2 calls.
- if (!flag_errno_math) {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::log2);
- Result = CastToFPType(Result, ConvertType(gimple_call_return_type(stmt)));
- return true;
- }
- break;
- case BUILT_IN_LOG10:
- case BUILT_IN_LOG10F:
- case BUILT_IN_LOG10L:
- // If errno math has been disabled, expand these to llvm.log10 calls.
- if (!flag_errno_math) {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::log10);
- Result = CastToFPType(Result, ConvertType(gimple_call_return_type(stmt)));
- return true;
- }
- break;
- case BUILT_IN_EXP:
- case BUILT_IN_EXPF:
- case BUILT_IN_EXPL:
- // If errno math has been disabled, expand these to llvm.exp calls.
- if (!flag_errno_math) {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::exp);
- Result = CastToFPType(Result, ConvertType(gimple_call_return_type(stmt)));
- return true;
- }
- break;
- case BUILT_IN_EXP2:
- case BUILT_IN_EXP2F:
- case BUILT_IN_EXP2L:
- // If errno math has been disabled, expand these to llvm.exp2 calls.
- if (!flag_errno_math) {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::exp2);
- Result = CastToFPType(Result, ConvertType(gimple_call_return_type(stmt)));
- return true;
- }
- break;
- case BUILT_IN_FFS: // These GCC builtins always return int.
- case BUILT_IN_FFSL:
- case BUILT_IN_FFSLL: { // FFS(X) -> (x == 0 ? 0 : CTTZ(x)+1)
- // The argument and return type of cttz should match the argument type of
- // the ffs, but should ignore the return type of ffs.
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- EmitBuiltinUnaryOp(Amt, Result, Intrinsic::cttz);
- Result = Builder.CreateAdd(Result,
- ConstantInt::get(Result->getType(), 1));
- Result = Builder.CreateIntCast(Result,
- ConvertType(gimple_call_return_type(stmt)),
- /*isSigned*/false);
- Value *Cond =
- Builder.CreateICmpEQ(Amt,
- Constant::getNullValue(Amt->getType()));
- Result = Builder.CreateSelect(Cond,
- Constant::getNullValue(Result->getType()),
- Result);
- return true;
- }
- case BUILT_IN_LCEIL:
- case BUILT_IN_LCEILF:
- case BUILT_IN_LCEILL:
- case BUILT_IN_LLCEIL:
- case BUILT_IN_LLCEILF:
- case BUILT_IN_LLCEILL:
- Result = EmitBuiltinLCEIL(stmt);
- return true;
- case BUILT_IN_LFLOOR:
- case BUILT_IN_LFLOORF:
- case BUILT_IN_LFLOORL:
- case BUILT_IN_LLFLOOR:
- case BUILT_IN_LLFLOORF:
- case BUILT_IN_LLFLOORL:
- Result = EmitBuiltinLFLOOR(stmt);
- return true;
-//TODO case BUILT_IN_FLT_ROUNDS: {
-//TODO Result =
-//TODO Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
-//TODO Intrinsic::flt_rounds));
-//TODO Result = Builder.CreateBitCast(Result, ConvertType(gimple_call_return_type(stmt)));
-//TODO return true;
-//TODO }
- case BUILT_IN_TRAP:
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::trap));
- // Emit an explicit unreachable instruction.
- Builder.CreateUnreachable();
- BeginBlock(BasicBlock::Create(Context));
- return true;
-
-//TODO // Convert annotation built-in to llvm.annotation intrinsic.
-//TODO case BUILT_IN_ANNOTATION: {
-//TODO
-//TODO // Get file and line number
-//TODO location_t locus = gimple_location(stmt);
-//TODO Constant *lineNo = ConstantInt::get(Type::getInt32Ty, LOCATION_LINE(locus));
-//TODO Constant *file = ConvertMetadataStringToGV(LOCATION_FILE(locus));
-//TODO const Type *SBP= Type::getInt8PtrTy(Context);
-//TODO file = TheFolder->CreateBitCast(file, SBP);
-//TODO
-//TODO // Get arguments.
-//TODO tree arglist = CALL_EXPR_ARGS(stmt);
-//TODO Value *ExprVal = EmitMemory(gimple_call_arg(stmt, 0));
-//TODO const Type *Ty = ExprVal->getType();
-//TODO Value *StrVal = EmitMemory(gimple_call_arg(stmt, 1));
-//TODO
-//TODO SmallVector<Value *, 4> Args;
-//TODO Args.push_back(ExprVal);
-//TODO Args.push_back(StrVal);
-//TODO Args.push_back(file);
-//TODO Args.push_back(lineNo);
-//TODO
-//TODO assert(Ty && "llvm.annotation arg type may not be null");
-//TODO Result = Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
-//TODO Intrinsic::annotation,
-//TODO &Ty,
-//TODO 1),
-//TODO Args.begin(), Args.end());
-//TODO return true;
-//TODO }
-
- case BUILT_IN_SYNCHRONIZE: {
- // We assume, as gcc appears to, that this only applies to cached memory.
- Value* C[5];
- C[0] = C[1] = C[2] = C[3] = ConstantInt::get(Type::getInt1Ty(Context), 1);
- C[4] = ConstantInt::get(Type::getInt1Ty(Context), 0);
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::memory_barrier),
- C, C + 5);
- return true;
- }
-#if defined(TARGET_ALPHA) || defined(TARGET_386) || defined(TARGET_POWERPC) \
- || defined(TARGET_ARM)
- // gcc uses many names for the sync intrinsics
- // The type of the first argument is not reliable for choosing the
- // right llvm function; if the original type is not volatile, gcc has
- // helpfully changed it to "volatile void *" at this point. The
- // original type can be recovered from the function type in most cases.
- // For lock_release and bool_compare_and_swap even that is not good
- // enough, we have to key off the opcode.
- // Note that Intrinsic::getDeclaration expects the type list in reversed
- // order, while CreateCall expects the parameter list in normal order.
- case BUILT_IN_BOOL_COMPARE_AND_SWAP_1: {
- Result = BuildCmpAndSwapAtomicBuiltin(stmt, unsigned_char_type_node, true);
- return true;
- }
- case BUILT_IN_BOOL_COMPARE_AND_SWAP_2: {
- Result = BuildCmpAndSwapAtomicBuiltin(stmt, short_unsigned_type_node, true);
- return true;
- }
- case BUILT_IN_BOOL_COMPARE_AND_SWAP_4: {
- Result = BuildCmpAndSwapAtomicBuiltin(stmt, unsigned_type_node, true);
- return true;
- }
- case BUILT_IN_BOOL_COMPARE_AND_SWAP_8: {
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- Result = BuildCmpAndSwapAtomicBuiltin(stmt, long_long_unsigned_type_node,
- true);
- return true;
- }
-
- case BUILT_IN_VAL_COMPARE_AND_SWAP_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_VAL_COMPARE_AND_SWAP_1:
- case BUILT_IN_VAL_COMPARE_AND_SWAP_2:
- case BUILT_IN_VAL_COMPARE_AND_SWAP_4: {
- tree type = gimple_call_return_type(stmt);
- Result = BuildCmpAndSwapAtomicBuiltin(stmt, type, false);
- return true;
- }
- case BUILT_IN_FETCH_AND_ADD_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_FETCH_AND_ADD_1:
- case BUILT_IN_FETCH_AND_ADD_2:
- case BUILT_IN_FETCH_AND_ADD_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_load_add);
- return true;
- }
- case BUILT_IN_FETCH_AND_SUB_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_FETCH_AND_SUB_1:
- case BUILT_IN_FETCH_AND_SUB_2:
- case BUILT_IN_FETCH_AND_SUB_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_load_sub);
- return true;
- }
- case BUILT_IN_FETCH_AND_OR_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_FETCH_AND_OR_1:
- case BUILT_IN_FETCH_AND_OR_2:
- case BUILT_IN_FETCH_AND_OR_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_load_or);
- return true;
- }
- case BUILT_IN_FETCH_AND_AND_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_FETCH_AND_AND_1:
- case BUILT_IN_FETCH_AND_AND_2:
- case BUILT_IN_FETCH_AND_AND_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_load_and);
- return true;
- }
- case BUILT_IN_FETCH_AND_XOR_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_FETCH_AND_XOR_1:
- case BUILT_IN_FETCH_AND_XOR_2:
- case BUILT_IN_FETCH_AND_XOR_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_load_xor);
- return true;
- }
- case BUILT_IN_FETCH_AND_NAND_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_FETCH_AND_NAND_1:
- case BUILT_IN_FETCH_AND_NAND_2:
- case BUILT_IN_FETCH_AND_NAND_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_load_nand);
- return true;
- }
- case BUILT_IN_LOCK_TEST_AND_SET_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_LOCK_TEST_AND_SET_1:
- case BUILT_IN_LOCK_TEST_AND_SET_2:
- case BUILT_IN_LOCK_TEST_AND_SET_4: {
- Result = BuildBinaryAtomicBuiltin(stmt, Intrinsic::atomic_swap);
- return true;
- }
-
- case BUILT_IN_ADD_AND_FETCH_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_ADD_AND_FETCH_1:
- case BUILT_IN_ADD_AND_FETCH_2:
- case BUILT_IN_ADD_AND_FETCH_4: {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_load_add,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateAdd(Result, C[1]);
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return true;
- }
- case BUILT_IN_SUB_AND_FETCH_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_SUB_AND_FETCH_1:
- case BUILT_IN_SUB_AND_FETCH_2:
- case BUILT_IN_SUB_AND_FETCH_4: {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_load_sub,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateSub(Result, C[1]);
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return true;
- }
- case BUILT_IN_OR_AND_FETCH_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_OR_AND_FETCH_1:
- case BUILT_IN_OR_AND_FETCH_2:
- case BUILT_IN_OR_AND_FETCH_4: {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_load_or,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateOr(Result, C[1]);
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return true;
- }
- case BUILT_IN_AND_AND_FETCH_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_AND_AND_FETCH_1:
- case BUILT_IN_AND_AND_FETCH_2:
- case BUILT_IN_AND_AND_FETCH_4: {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_load_and,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateAnd(Result, C[1]);
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return true;
- }
- case BUILT_IN_XOR_AND_FETCH_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_XOR_AND_FETCH_1:
- case BUILT_IN_XOR_AND_FETCH_2:
- case BUILT_IN_XOR_AND_FETCH_4: {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_load_xor,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateXor(Result, C[1]);
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return true;
- }
- case BUILT_IN_NAND_AND_FETCH_8:
-#if defined(TARGET_POWERPC)
- if (!TARGET_64BIT)
- return false;
-#endif
- case BUILT_IN_NAND_AND_FETCH_1:
- case BUILT_IN_NAND_AND_FETCH_2:
- case BUILT_IN_NAND_AND_FETCH_4: {
- tree return_type = gimple_call_return_type(stmt);
- const Type *ResultTy = ConvertType(return_type);
- Value* C[2] = {
- EmitMemory(gimple_call_arg(stmt, 0)),
- EmitMemory(gimple_call_arg(stmt, 1))
- };
- const Type* Ty[2];
- Ty[0] = ResultTy;
- Ty[1] = ResultTy->getPointerTo();
- C[0] = Builder.CreateBitCast(C[0], Ty[1]);
- C[1] = Builder.CreateIntCast(C[1], Ty[0],
- /*isSigned*/!TYPE_UNSIGNED(return_type),
- "cast");
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::atomic_load_nand,
- Ty, 2),
- C, C + 2);
-
- // The gcc builtins are also full memory barriers.
- // FIXME: __sync_lock_test_and_set and __sync_lock_release require less.
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- EmitMemoryBarrier(true, true, true, true, false);
-#else
- EmitMemoryBarrier(true, true, true, true, true);
-#endif
-
- Result = Builder.CreateAnd(Builder.CreateNot(Result), C[1]);
- Result = Builder.CreateIntToPtr(Result, ResultTy);
- return true;
- }
-
- case BUILT_IN_LOCK_RELEASE_1:
- case BUILT_IN_LOCK_RELEASE_2:
- case BUILT_IN_LOCK_RELEASE_4:
- case BUILT_IN_LOCK_RELEASE_8:
- case BUILT_IN_LOCK_RELEASE_16: {
- // This is effectively a volatile store of 0, and has no return value.
- // The argument has typically been coerced to "volatile void*"; the
- // only way to find the size of the operation is from the builtin
- // opcode.
- const Type *Ty;
- switch(DECL_FUNCTION_CODE(fndecl)) {
- case BUILT_IN_LOCK_RELEASE_16: // not handled; should use SSE on x86
- default:
- abort();
- case BUILT_IN_LOCK_RELEASE_1:
- Ty = Type::getInt8Ty(Context); break;
- case BUILT_IN_LOCK_RELEASE_2:
- Ty = Type::getInt16Ty(Context); break;
- case BUILT_IN_LOCK_RELEASE_4:
- Ty = Type::getInt32Ty(Context); break;
- case BUILT_IN_LOCK_RELEASE_8:
- Ty = Type::getInt64Ty(Context); break;
- }
- Value *Ptr = EmitMemory(gimple_call_arg(stmt, 0));
- Ptr = Builder.CreateBitCast(Ptr, Ty->getPointerTo());
- Builder.CreateStore(Constant::getNullValue(Ty), Ptr, true);
- Result = 0;
- return true;
- }
-
-#endif //FIXME: these break the build for backends that haven't implemented them
-
-
-#if 1 // FIXME: Should handle these GCC extensions eventually.
- case BUILT_IN_LONGJMP: {
- if (validate_gimple_arglist(stmt, POINTER_TYPE, INTEGER_TYPE, VOID_TYPE)) {
- tree value = gimple_call_arg(stmt, 1);
-
- if (TREE_CODE(value) != INTEGER_CST ||
- cast<ConstantInt>(EmitMemory(value))->getValue() != 1) {
- error ("%<__builtin_longjmp%> second argument must be 1");
- return false;
- }
- }
-#if defined(TARGET_ARM) && defined(CONFIG_DARWIN_H)
- Value *Buf = Emit(TREE_VALUE(arglist), 0);
- Buf = Builder.CreateBitCast(Buf, Type::getInt8Ty(Context)->getPointerTo());
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_sjlj_longjmp),
- Buf);
- Result = 0;
- return true;
-#endif
- // Fall-through
- }
- case BUILT_IN_APPLY_ARGS:
- case BUILT_IN_APPLY:
- case BUILT_IN_RETURN:
- case BUILT_IN_SAVEREGS:
- case BUILT_IN_ARGS_INFO:
- case BUILT_IN_NEXT_ARG:
- case BUILT_IN_CLASSIFY_TYPE:
- case BUILT_IN_AGGREGATE_INCOMING_ADDRESS:
- case BUILT_IN_SETJMP_SETUP:
- case BUILT_IN_SETJMP_DISPATCHER:
- case BUILT_IN_SETJMP_RECEIVER:
- case BUILT_IN_UPDATE_SETJMP_BUF:
-
- // FIXME: HACK: Just ignore these.
- {
- const Type *Ty = ConvertType(gimple_call_return_type(stmt));
- if (!Ty->isVoidTy())
- Result = Constant::getNullValue(Ty);
- return true;
- }
-#endif // FIXME: Should handle these GCC extensions eventually.
- }
- return false;
-}
-
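The *_AND_FETCH cases above exploit the fact that GCC's __sync_OP_and_fetch builtins return the new value, while the atomic read-modify-write primitive returns the old one, so the lowering calls the intrinsic and then re-applies OP to its result. A minimal sketch of that relationship using std::atomic (illustrative only, not the llvm.atomic.* lowering itself):

    #include <atomic>
    #include <cassert>

    int main() {
      std::atomic<int> Counter(10);

      // __sync_fetch_and_add style: the primitive returns the OLD value.
      int Old = Counter.fetch_add(5);     // Old == 10, Counter is now 15

      // __sync_add_and_fetch style: the NEW value is wanted, so perform the
      // same read-modify-write and then redo the addition on the old value.
      int New = Counter.fetch_add(5) + 5; // New == 20, Counter is now 20

      assert(Old == 10 && New == 20);
      return 0;
    }
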
-bool TreeToLLVM::EmitBuiltinUnaryOp(Value *InVal, Value *&Result,
- Intrinsic::ID Id) {
-  // The intrinsic might be overloaded, in which case the argument is of
- // varying type. Make sure that we specify the actual type for "iAny"
- // by passing it as the 3rd and 4th parameters. This isn't needed for
- // most intrinsics, but is needed for ctpop, cttz, ctlz.
- const Type *Ty = InVal->getType();
- Result = Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Id, &Ty, 1),
- InVal);
- return true;
-}
-
-Value *TreeToLLVM::EmitBuiltinSQRT(gimple stmt) {
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- const Type* Ty = Amt->getType();
-
- return Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::sqrt, &Ty, 1),
- Amt);
-}
-
-Value *TreeToLLVM::EmitBuiltinPOWI(gimple stmt) {
- if (!validate_gimple_arglist(stmt, REAL_TYPE, INTEGER_TYPE, VOID_TYPE))
- return 0;
-
- Value *Val = EmitMemory(gimple_call_arg(stmt, 0));
- Value *Pow = EmitMemory(gimple_call_arg(stmt, 1));
- const Type *Ty = Val->getType();
- Pow = Builder.CreateIntCast(Pow, Type::getInt32Ty(Context), /*isSigned*/true);
-
- SmallVector<Value *,2> Args;
- Args.push_back(Val);
- Args.push_back(Pow);
- return Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::powi, &Ty, 1),
- Args.begin(), Args.end());
-}
-
-Value *TreeToLLVM::EmitBuiltinPOW(gimple stmt) {
- if (!validate_gimple_arglist(stmt, REAL_TYPE, REAL_TYPE, VOID_TYPE))
- return 0;
-
- Value *Val = EmitMemory(gimple_call_arg(stmt, 0));
- Value *Pow = EmitMemory(gimple_call_arg(stmt, 1));
- const Type *Ty = Val->getType();
-
- SmallVector<Value *,2> Args;
- Args.push_back(Val);
- Args.push_back(Pow);
- return Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::pow, &Ty, 1),
- Args.begin(), Args.end());
-}
-
-Value *TreeToLLVM::EmitBuiltinLCEIL(gimple stmt) {
- if (!validate_gimple_arglist(stmt, REAL_TYPE, VOID_TYPE))
- return 0;
-
- // Cast the result of "ceil" to the appropriate integer type.
- // First call the appropriate version of "ceil".
- tree op = gimple_call_arg(stmt, 0);
- StringRef Name = SelectFPName(TREE_TYPE(op), "ceilf", "ceil", "ceill");
- CallInst *Call = EmitSimpleCall(Name, TREE_TYPE(op), op, NULL);
- Call->setDoesNotThrow();
- Call->setDoesNotAccessMemory();
-
- // Then type cast the result of the "ceil" call.
- tree type = gimple_call_return_type(stmt);
- const Type *RetTy = GetRegType(type);
- return TYPE_UNSIGNED(type) ? Builder.CreateFPToUI(Call, RetTy) :
- Builder.CreateFPToSI(Call, RetTy);
-}
-
-Value *TreeToLLVM::EmitBuiltinLFLOOR(gimple stmt) {
- if (!validate_gimple_arglist(stmt, REAL_TYPE, VOID_TYPE))
- return 0;
-
- // Cast the result of "floor" to the appropriate integer type.
- // First call the appropriate version of "floor".
- tree op = gimple_call_arg(stmt, 0);
- StringRef Name = SelectFPName(TREE_TYPE(op), "floorf", "floor", "floorl");
- CallInst *Call = EmitSimpleCall(Name, TREE_TYPE(op), op, NULL);
- Call->setDoesNotThrow();
- Call->setDoesNotAccessMemory();
-
- // Then type cast the result of the "floor" call.
- tree type = gimple_call_return_type(stmt);
- const Type *RetTy = GetRegType(type);
- return TYPE_UNSIGNED(type) ? Builder.CreateFPToUI(Call, RetTy) :
- Builder.CreateFPToSI(Call, RetTy);
-}
-
-bool TreeToLLVM::EmitBuiltinConstantP(gimple stmt, Value *&Result) {
- Result = Constant::getNullValue(ConvertType(gimple_call_return_type(stmt)));
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinExtendPointer(gimple stmt, Value *&Result) {
- tree arg0 = gimple_call_arg(stmt, 0);
- Value *Amt = EmitMemory(arg0);
- bool AmtIsSigned = !TYPE_UNSIGNED(TREE_TYPE(arg0));
- bool ExpIsSigned = !TYPE_UNSIGNED(gimple_call_return_type(stmt));
- Result = CastToAnyType(Amt, AmtIsSigned,
- ConvertType(gimple_call_return_type(stmt)),
- ExpIsSigned);
- return true;
-}
-
-/// OptimizeIntoPlainBuiltIn - Return true if it's safe to lower the object
-/// size checking builtin calls (e.g. __builtin___memcpy_chk) into the
-/// plain non-checking calls. If the size of the argument is either -1 (unknown)
-/// or large enough to ensure no overflow (>= len), then it's safe to do so.
-static bool OptimizeIntoPlainBuiltIn(gimple stmt, Value *Len, Value *Size) {
- if (BitCastInst *SizeBC = dyn_cast<BitCastInst>(Size))
- Size = SizeBC->getOperand(0);
- ConstantInt *SizeCI = dyn_cast<ConstantInt>(Size);
- if (!SizeCI)
- return false;
- if (SizeCI->isAllOnesValue())
- // If size is -1, convert to plain memcpy, etc.
- return true;
-
- if (BitCastInst *LenBC = dyn_cast<BitCastInst>(Len))
- Len = LenBC->getOperand(0);
- ConstantInt *LenCI = dyn_cast<ConstantInt>(Len);
- if (!LenCI)
- return false;
- if (SizeCI->getValue().ult(LenCI->getValue())) {
- warning_at (gimple_location(stmt), 0,
- "call to %D will always overflow destination buffer",
- gimple_call_fndecl(stmt));
- return false;
- }
- return true;
-}
-
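OptimizeIntoPlainBuiltIn inspects the extra "object size" argument that GCC's checked builtins carry; when that size is (size_t)-1 the destination size is unknown and the checked call may as well become the plain call. A hedged example of where such calls typically come from (the exact expansion depends on GCC's fortification logic):

    #include <cstddef>

    void copy(char *dst, const char *src, std::size_t n) {
      // Under _FORTIFY_SOURCE, memcpy is expanded to roughly this form.
      // __builtin_object_size yields (size_t)-1 when the destination size
      // cannot be determined, in which case the checked call degenerates
      // to a plain memcpy.
      __builtin___memcpy_chk(dst, src, n, __builtin_object_size(dst, 0));
    }
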
-/// EmitBuiltinMemCopy - Emit an llvm.memcpy or llvm.memmove intrinsic,
-/// depending on the value of isMemMove.
-bool TreeToLLVM::EmitBuiltinMemCopy(gimple stmt, Value *&Result, bool isMemMove,
- bool SizeCheck) {
- if (SizeCheck) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, POINTER_TYPE,
- INTEGER_TYPE, INTEGER_TYPE, VOID_TYPE))
- return false;
- } else {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, POINTER_TYPE,
- INTEGER_TYPE, VOID_TYPE))
- return false;
- }
-
- tree Dst = gimple_call_arg(stmt, 0);
- tree Src = gimple_call_arg(stmt, 1);
- unsigned SrcAlign = getPointerAlignment(Src);
- unsigned DstAlign = getPointerAlignment(Dst);
-
- Value *DstV = EmitMemory(Dst);
- Value *SrcV = EmitMemory(Src);
- Value *Len = EmitMemory(gimple_call_arg(stmt, 2));
- if (SizeCheck) {
- tree SizeArg = gimple_call_arg(stmt, 3);
- Value *Size = EmitMemory(SizeArg);
- if (!OptimizeIntoPlainBuiltIn(stmt, Len, Size))
- return false;
- }
-
- Result = isMemMove ?
- EmitMemMove(DstV, SrcV, Len, std::min(SrcAlign, DstAlign)) :
- EmitMemCpy(DstV, SrcV, Len, std::min(SrcAlign, DstAlign));
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinMemSet(gimple stmt, Value *&Result, bool SizeCheck){
- if (SizeCheck) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, INTEGER_TYPE,
- INTEGER_TYPE, INTEGER_TYPE, VOID_TYPE))
- return false;
- } else {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, INTEGER_TYPE,
- INTEGER_TYPE, VOID_TYPE))
- return false;
- }
-
- tree Dst = gimple_call_arg(stmt, 0);
- unsigned DstAlign = getPointerAlignment(Dst);
-
- Value *DstV = EmitMemory(Dst);
- Value *Val = EmitMemory(gimple_call_arg(stmt, 1));
- Value *Len = EmitMemory(gimple_call_arg(stmt, 2));
- if (SizeCheck) {
- tree SizeArg = gimple_call_arg(stmt, 3);
- Value *Size = EmitMemory(SizeArg);
- if (!OptimizeIntoPlainBuiltIn(stmt, Len, Size))
- return false;
- }
- Result = EmitMemSet(DstV, Val, Len, DstAlign);
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinBZero(gimple stmt, Value *&/*Result*/) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, INTEGER_TYPE, VOID_TYPE))
- return false;
-
- tree Dst = gimple_call_arg(stmt, 0);
- unsigned DstAlign = getPointerAlignment(Dst);
-
- Value *DstV = EmitMemory(Dst);
- Value *Val = Constant::getNullValue(Type::getInt32Ty(Context));
- Value *Len = EmitMemory(gimple_call_arg(stmt, 1));
- EmitMemSet(DstV, Val, Len, DstAlign);
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinPrefetch(gimple stmt) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, 0))
- return false;
-
- Value *Ptr = EmitMemory(gimple_call_arg(stmt, 0));
- Value *ReadWrite = 0;
- Value *Locality = 0;
-
- if (gimple_call_num_args(stmt) > 1) { // Args 1/2 are optional
- ReadWrite = EmitMemory(gimple_call_arg(stmt, 1));
- if (!isa<ConstantInt>(ReadWrite)) {
- error("second argument to %<__builtin_prefetch%> must be a constant");
- ReadWrite = 0;
- } else if (cast<ConstantInt>(ReadWrite)->getZExtValue() > 1) {
- warning (0, "invalid second argument to %<__builtin_prefetch%>;"
- " using zero");
- ReadWrite = 0;
- } else {
- ReadWrite = TheFolder->CreateIntCast(cast<Constant>(ReadWrite),
- Type::getInt32Ty(Context),
- /*isSigned*/false);
- }
-
- if (gimple_call_num_args(stmt) > 2) {
- Locality = EmitMemory(gimple_call_arg(stmt, 2));
- if (!isa<ConstantInt>(Locality)) {
- error("third argument to %<__builtin_prefetch%> must be a constant");
- Locality = 0;
- } else if (cast<ConstantInt>(Locality)->getZExtValue() > 3) {
- warning(0, "invalid third argument to %<__builtin_prefetch%>; using 3");
- Locality = 0;
- } else {
- Locality = TheFolder->CreateIntCast(cast<Constant>(Locality),
- Type::getInt32Ty(Context),
- /*isSigned*/false);
- }
- }
- }
-
- // Default to highly local read.
- if (ReadWrite == 0)
- ReadWrite = Constant::getNullValue(Type::getInt32Ty(Context));
- if (Locality == 0)
- Locality = ConstantInt::get(Type::getInt32Ty(Context), 3);
-
- Ptr = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
-
- Value *Ops[3] = { Ptr, ReadWrite, Locality };
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::prefetch),
- Ops, Ops+3);
- return true;
-}
-
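The defaults chosen above (ReadWrite = 0, Locality = 3) match GCC's documented behaviour for __builtin_prefetch when the optional arguments are omitted. For example:

    void warm(const double *p, double *q) {
      // rw: 0 = prefetch for reading, 1 = prefetch for writing.
      // locality: 0..3, where 3 means keep in all levels of cache.
      __builtin_prefetch(p);       // same as __builtin_prefetch(p, 0, 3)
      __builtin_prefetch(q, 1, 1); // for writing, low temporal locality
    }
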
-/// EmitBuiltinReturnAddr - Emit an llvm.returnaddress or llvm.frameaddress
-/// instruction, depending on whether isFrame is true or not.
-bool TreeToLLVM::EmitBuiltinReturnAddr(gimple stmt, Value *&Result,
- bool isFrame) {
- if (!validate_gimple_arglist(stmt, INTEGER_TYPE, VOID_TYPE))
- return false;
-
- ConstantInt *Level =
- dyn_cast<ConstantInt>(EmitMemory(gimple_call_arg(stmt, 0)));
- if (!Level) {
- if (isFrame)
- error("invalid argument to %<__builtin_frame_address%>");
- else
- error("invalid argument to %<__builtin_return_address%>");
- return false;
- }
-
- Intrinsic::ID IID =
- !isFrame ? Intrinsic::returnaddress : Intrinsic::frameaddress;
- Result = Builder.CreateCall(Intrinsic::getDeclaration(TheModule, IID), Level);
- Result = Builder.CreateBitCast(Result,
- ConvertType(gimple_call_return_type(stmt)));
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinExtractReturnAddr(gimple stmt, Value *&Result) {
- Value *Ptr = EmitMemory(gimple_call_arg(stmt, 0));
-
- // FIXME: Actually we should do something like this:
- //
- // Result = (Ptr & MASK_RETURN_ADDR) + RETURN_ADDR_OFFSET, if mask and
- // offset are defined. This seems to be needed for: ARM, MIPS, Sparc.
- // Unfortunately, these constants are defined as RTL expressions and
- // should be handled separately.
-
- Result = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinFrobReturnAddr(gimple stmt, Value *&Result) {
- Value *Ptr = EmitMemory(gimple_call_arg(stmt, 0));
-
- // FIXME: Actually we should do something like this:
- //
- // Result = Ptr - RETURN_ADDR_OFFSET, if offset is defined. This seems to be
- // needed for: MIPS, Sparc. Unfortunately, these constants are defined
- // as RTL expressions and should be handled separately.
-
- Result = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinStackSave(gimple stmt, Value *&Result) {
- if (!validate_gimple_arglist(stmt, VOID_TYPE))
- return false;
-
- Result = Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::stacksave));
- return true;
-}
-
-
-// Exception handling builtins.
-
-bool TreeToLLVM::EmitBuiltinEHPointer(gimple stmt, Value *&Result) {
- // Lookup the local that holds the exception pointer for this region.
- unsigned RegionNo = tree_low_cst(gimple_call_arg(stmt, 0), 0);
- AllocaInst *ExcPtr = getExceptionPtr(RegionNo);
- // Load the exception pointer out.
- Result = Builder.CreateLoad(ExcPtr);
- // Ensure the returned value has the right pointer type.
- tree type = gimple_call_return_type(stmt);
- Result = Builder.CreateBitCast(Result, ConvertType(type));
- return true;
-}
-
-
-// Builtins used by the exception handling runtime.
-
-// On most machines, the CFA coincides with the first incoming parm.
-#ifndef ARG_POINTER_CFA_OFFSET
-#define ARG_POINTER_CFA_OFFSET(FNDECL) FIRST_PARM_OFFSET (FNDECL)
-#endif
-
-// The mapping from gcc register number to DWARF 2 CFA column number. By
-// default, we just provide columns for all registers.
-#ifndef DWARF_FRAME_REGNUM
-#define DWARF_FRAME_REGNUM(REG) DBX_REGISTER_NUMBER (REG)
-#endif
-
-// Map register numbers held in the call frame info that gcc has
-// collected using DWARF_FRAME_REGNUM to those that should be output in
-// .debug_frame and .eh_frame.
-#ifndef DWARF2_FRAME_REG_OUT
-#define DWARF2_FRAME_REG_OUT(REGNO, FOR_EH) (REGNO)
-#endif
-
-/* Registers that get partially clobbered by a call in a given mode.
- These must not be call used registers. */
-#ifndef HARD_REGNO_CALL_PART_CLOBBERED
-#define HARD_REGNO_CALL_PART_CLOBBERED(REGNO, MODE) 0
-#endif
-
-bool TreeToLLVM::EmitBuiltinDwarfCFA(gimple stmt, Value *&Result) {
- if (!validate_gimple_arglist(stmt, VOID_TYPE))
- return false;
-
- int cfa_offset = ARG_POINTER_CFA_OFFSET(exp);
-
- // FIXME: is i32 always enough here?
- Result =
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_dwarf_cfa),
- ConstantInt::get(Type::getInt32Ty(Context), cfa_offset));
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinDwarfSPColumn(gimple stmt, Value *&Result) {
- if (!validate_gimple_arglist(stmt, VOID_TYPE))
- return false;
-
- unsigned int dwarf_regnum = DWARF_FRAME_REGNUM(STACK_POINTER_REGNUM);
- Result = ConstantInt::get(ConvertType(gimple_call_return_type(stmt)),
- dwarf_regnum);
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinEHReturnDataRegno(gimple stmt, Value *&Result) {
-#ifdef EH_RETURN_DATA_REGNO
- if (!validate_gimple_arglist(stmt, INTEGER_TYPE, VOID_TYPE))
- return false;
-
- tree which = gimple_call_arg(stmt, 0);
- unsigned HOST_WIDE_INT iwhich;
-
- if (TREE_CODE (which) != INTEGER_CST) {
- error ("argument of %<__builtin_eh_return_regno%> must be constant");
- return false;
- }
-
- iwhich = tree_low_cst (which, 1);
- iwhich = EH_RETURN_DATA_REGNO (iwhich);
- if (iwhich == INVALID_REGNUM)
- return false;
-
- iwhich = DWARF_FRAME_REGNUM (iwhich);
-
- Result = ConstantInt::get(ConvertType(gimple_call_return_type(stmt)), iwhich);
-#endif
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinEHReturn(gimple stmt, Value *&/*Result*/) {
- if (!validate_gimple_arglist(stmt, INTEGER_TYPE, POINTER_TYPE, VOID_TYPE))
- return false;
-
- const Type *IntPtr = TD.getIntPtrType(Context);
- Value *Offset = EmitMemory(gimple_call_arg(stmt, 0));
- Value *Handler = EmitMemory(gimple_call_arg(stmt, 1));
-
- Intrinsic::ID IID = IntPtr->isIntegerTy(32) ?
- Intrinsic::eh_return_i32 : Intrinsic::eh_return_i64;
-
- Offset = Builder.CreateIntCast(Offset, IntPtr, /*isSigned*/true);
- Handler = Builder.CreateBitCast(Handler, Type::getInt8PtrTy(Context));
-
- Value *Args[2] = { Offset, Handler };
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, IID), Args, Args + 2);
- Builder.CreateUnreachable();
- BeginBlock(BasicBlock::Create(Context));
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinInitDwarfRegSizes(gimple stmt, Value *&/*Result*/) {
-#ifdef DWARF2_UNWIND_INFO
- unsigned int i;
- bool wrote_return_column = false;
- static bool reg_modes_initialized = false;
-
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, VOID_TYPE))
- return false;
-
- if (!reg_modes_initialized) {
- init_reg_modes_target();
- reg_modes_initialized = true;
- }
-
- Value *Addr =
- Builder.CreateBitCast(EmitMemory(gimple_call_arg(stmt, 0)),
- Type::getInt8PtrTy(Context));
- Constant *Size, *Idx;
-
- for (i = 0; i < FIRST_PSEUDO_REGISTER; i++) {
- int rnum = DWARF2_FRAME_REG_OUT (DWARF_FRAME_REGNUM (i), 1);
-
- if (rnum < DWARF_FRAME_REGISTERS) {
- enum machine_mode save_mode = reg_raw_mode[i];
- HOST_WIDE_INT size;
-
- if (HARD_REGNO_CALL_PART_CLOBBERED (i, save_mode))
- save_mode = choose_hard_reg_mode (i, 1, true);
- if (DWARF_FRAME_REGNUM (i) == DWARF_FRAME_RETURN_COLUMN) {
- if (save_mode == VOIDmode)
- continue;
- wrote_return_column = true;
- }
- size = GET_MODE_SIZE (save_mode);
- if (rnum < 0)
- continue;
-
- Size = ConstantInt::get(Type::getInt8Ty(Context), size);
- Idx = ConstantInt::get(Type::getInt32Ty(Context), rnum);
- Builder.CreateStore(Size, Builder.CreateGEP(Addr, Idx), false);
- }
- }
-
- if (!wrote_return_column) {
- Size = ConstantInt::get(Type::getInt8Ty(Context),
- GET_MODE_SIZE (Pmode));
- Idx = ConstantInt::get(Type::getInt32Ty(Context),
- DWARF_FRAME_RETURN_COLUMN);
- Builder.CreateStore(Size, Builder.CreateGEP(Addr, Idx), false);
- }
-
-#ifdef DWARF_ALT_FRAME_RETURN_COLUMN
- Size = ConstantInt::get(Type::getInt8Ty(Context),
- GET_MODE_SIZE (Pmode));
- Idx = ConstantInt::get(Type::getInt32Ty(Context),
- DWARF_ALT_FRAME_RETURN_COLUMN);
- Builder.CreateStore(Size, Builder.CreateGEP(Addr, Idx), false);
-#endif
-
-#endif /* DWARF2_UNWIND_INFO */
-
- // TODO: the RS6000 target needs extra initialization [gcc changeset 122468].
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinUnwindInit(gimple stmt, Value *&/*Result*/) {
- if (!validate_gimple_arglist(stmt, VOID_TYPE))
- return false;
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_unwind_init));
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinStackRestore(gimple stmt) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, VOID_TYPE))
- return false;
-
- Value *Ptr = EmitMemory(gimple_call_arg(stmt, 0));
- Ptr = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule,
- Intrinsic::stackrestore), Ptr);
- return true;
-}
-
-
-bool TreeToLLVM::EmitBuiltinAlloca(gimple stmt, Value *&Result) {
- if (!validate_gimple_arglist(stmt, INTEGER_TYPE, VOID_TYPE))
- return false;
- Value *Amt = EmitMemory(gimple_call_arg(stmt, 0));
- Result = Builder.CreateAlloca(Type::getInt8Ty(Context), Amt);
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinExpect(gimple stmt, Value *&Result) {
-  // Ignore the hint for now and just expand the expr. This is safe, but not
- // optimal.
- Result = gimple_call_num_args(stmt) < 2 ?
- Constant::getNullValue(ConvertType(gimple_call_return_type(stmt))) :
- EmitMemory(gimple_call_arg(stmt, 0));
- return true;
-}
-
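Dropping the hint is legal because __builtin_expect(e, c) simply evaluates to e; the constant only guides branch prediction and code layout. A small example of the usual usage pattern:

    long slow_path(long x) { return -x; }

    long f(long x) {
      // The expression below has the value of (x == 0); the second argument
      // is only a hint, so lowering the builtin to its first argument (as
      // EmitBuiltinExpect does) is always correct, merely suboptimal.
      if (__builtin_expect(x == 0, 0))
        return slow_path(x);
      return x + 1;
    }
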
-bool TreeToLLVM::EmitBuiltinVAStart(gimple stmt) {
- if (gimple_call_num_args(stmt) < 2) {
- error_at (gimple_location(stmt),
- "too few arguments to function %<va_start%>");
- return true;
- }
-
- tree fntype = TREE_TYPE(current_function_decl);
- if (TYPE_ARG_TYPES(fntype) == 0 ||
- (tree_last(TYPE_ARG_TYPES(fntype)) == void_type_node)) {
- error("%<va_start%> used in function with fixed args");
- return true;
- }
-
- Constant *va_start = Intrinsic::getDeclaration(TheModule, Intrinsic::vastart);
- Value *ArgVal = EmitMemory(gimple_call_arg(stmt, 0));
- ArgVal = Builder.CreateBitCast(ArgVal, Type::getInt8PtrTy(Context));
- Builder.CreateCall(va_start, ArgVal);
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinVAEnd(gimple stmt) {
- Value *Arg = EmitMemory(gimple_call_arg(stmt, 0));
- Arg = Builder.CreateBitCast(Arg, Type::getInt8PtrTy(Context));
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::vaend),
- Arg);
- return true;
-}
-
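For reference, the C-level constructs these two functions lower: va_start maps to llvm.va_start on the va_list storage (after the fixed-arguments check above), and va_end maps to llvm.va_end. A typical variadic function looks like:

    #include <cstdarg>

    int sum(int count, ...) {
      va_list ap;
      va_start(ap, count);  // becomes a call to llvm.va_start
      int total = 0;
      for (int i = 0; i < count; ++i)
        total += va_arg(ap, int);
      va_end(ap);           // becomes a call to llvm.va_end
      return total;
    }
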
-bool TreeToLLVM::EmitBuiltinVACopy(gimple stmt) {
- tree Arg1T = gimple_call_arg(stmt, 0);
- tree Arg2T = gimple_call_arg(stmt, 1);
-
- Value *Arg1 = EmitMemory(Arg1T); // Emit the address of the destination.
- // The second arg of llvm.va_copy is a pointer to a valist.
- Value *Arg2;
- if (!AGGREGATE_TYPE_P(va_list_type_node)) {
- // Emit it as a value, then store it to a temporary slot.
- Value *V2 = EmitMemory(Arg2T);
- Arg2 = CreateTemporary(V2->getType());
- Builder.CreateStore(V2, Arg2);
- } else {
- // If the target has aggregate valists, then the second argument
- // from GCC is the address of the source valist and we don't
- // need to do anything special.
- Arg2 = EmitMemory(Arg2T);
- }
-
- static const Type *VPTy = Type::getInt8PtrTy(Context);
-
- // FIXME: This ignores alignment and volatility of the arguments.
- SmallVector<Value *, 2> Args;
- Args.push_back(Builder.CreateBitCast(Arg1, VPTy));
- Args.push_back(Builder.CreateBitCast(Arg2, VPTy));
-
- Builder.CreateCall(Intrinsic::getDeclaration(TheModule, Intrinsic::vacopy),
- Args.begin(), Args.end());
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinAdjustTrampoline(gimple stmt, Value *&Result) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, VOID_TYPE))
- return false;
-
- const Type *ResultTy = ConvertType(gimple_call_return_type(stmt));
-
- // The adjusted value is stored as a pointer at the start of the storage GCC
- // allocated for the trampoline - load it out and return it.
- assert(TD.getPointerSize() <= TRAMPOLINE_SIZE &&
- "Trampoline smaller than a pointer!");
- Value *Tramp = EmitMemory(gimple_call_arg(stmt, 0));
- Tramp = Builder.CreateBitCast(Tramp, ResultTy->getPointerTo());
- Result = Builder.CreateLoad(Tramp, "adjusted");
-
- // The load has the alignment of the trampoline storage.
- unsigned Align = TYPE_ALIGN(TREE_TYPE(TREE_TYPE(gimple_call_arg(stmt, 0))))/8;
- cast<LoadInst>(Result)->setAlignment(Align);
-
- return true;
-}
-
-bool TreeToLLVM::EmitBuiltinInitTrampoline(gimple stmt, Value *&/*Result*/) {
- if (!validate_gimple_arglist(stmt, POINTER_TYPE, POINTER_TYPE, POINTER_TYPE,
- VOID_TYPE))
- return false;
-
- // LLVM's trampoline intrinsic, llvm.init.trampoline, combines the effect of
- // GCC's init_trampoline and adjust_trampoline. Calls to adjust_trampoline
- // should return the result of the llvm.init.trampoline call. This is tricky
- // because the adjust_trampoline and init_trampoline calls need not occur in
- // the same function. To overcome this, we don't store the trampoline machine
- // code in the storage GCC created for it, we store the result of the call to
- // llvm.init.trampoline there instead. Since this storage is the argument to
- // adjust_trampoline, we turn adjust_trampoline into a load from its argument.
- // The trampoline machine code itself is stored in a stack temporary that we
- // create (one for each init_trampoline) in the function where init_trampoline
- // is called.
- static const Type *VPTy = Type::getInt8PtrTy(Context);
-
- // Create a stack temporary to hold the trampoline machine code.
- const Type *TrampType = ArrayType::get(Type::getInt8Ty(Context),
- TRAMPOLINE_SIZE);
- AllocaInst *TrampTmp = CreateTemporary(TrampType);
- TrampTmp->setAlignment(TRAMPOLINE_ALIGNMENT);
- TrampTmp->setName("TRAMP");
-
- Value *Func = EmitMemory(gimple_call_arg(stmt, 1));
- Value *Chain = EmitMemory(gimple_call_arg(stmt, 2));
-
- Value *Ops[3] = {
- Builder.CreateBitCast(TrampTmp, VPTy),
- Builder.CreateBitCast(Func, VPTy),
- Builder.CreateBitCast(Chain, VPTy)
- };
-
- Function *Intr = Intrinsic::getDeclaration(TheModule,
- Intrinsic::init_trampoline);
- Value *Adjusted = Builder.CreateCall(Intr, Ops, Ops + 3, "adjusted");
-
- // Store the llvm.init.trampoline result to the GCC trampoline storage.
- assert(TD.getPointerSize() <= TRAMPOLINE_SIZE &&
- "Trampoline smaller than a pointer!");
- Value *Tramp = EmitMemory(gimple_call_arg(stmt, 0));
- Tramp = Builder.CreateBitCast(Tramp, Adjusted->getType()->getPointerTo());
- StoreInst *Store = Builder.CreateStore(Adjusted, Tramp);
-
- // The store has the alignment of the trampoline storage.
- unsigned Align = TYPE_ALIGN(TREE_TYPE(TREE_TYPE(gimple_call_arg(stmt, 0))))/8;
- Store->setAlignment(Align);
-
- // The GCC trampoline storage is constant from this point on. Tell this to
- // the optimizers.
- Intr = Intrinsic::getDeclaration(TheModule, Intrinsic::invariant_start);
- Ops[0] = ConstantInt::get(Type::getInt64Ty(Context), TRAMPOLINE_SIZE);
- Ops[1] = Builder.CreateBitCast(Tramp, VPTy);
- Builder.CreateCall(Intr, Ops, Ops + 2);
-
- return true;
-}
-
-//===----------------------------------------------------------------------===//
-// ... Complex Math Expressions ...
-//===----------------------------------------------------------------------===//
-
-Value *TreeToLLVM::CreateComplex(Value *Real, Value *Imag, tree elt_type) {
- assert(Real->getType() == Imag->getType() && "Component type mismatch!");
- Real = Reg2Mem(Real, elt_type, Builder);
- Imag = Reg2Mem(Imag, elt_type, Builder);
- const Type *EltTy = Real->getType();
- Value *Result = UndefValue::get(StructType::get(Context, EltTy, EltTy, NULL));
- Result = Builder.CreateInsertValue(Result, Real, 0);
- Result = Builder.CreateInsertValue(Result, Imag, 1);
- return Result;
-}
-
-void TreeToLLVM::SplitComplex(Value *Complex, Value *&Real, Value *&Imag,
- tree elt_type) {
- Real = Mem2Reg(Builder.CreateExtractValue(Complex, 0), elt_type, Builder);
- Imag = Mem2Reg(Builder.CreateExtractValue(Complex, 1), elt_type, Builder);
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... L-Value Expressions ...
-//===----------------------------------------------------------------------===//
-
-Value *TreeToLLVM::EmitFieldAnnotation(Value *FieldPtr, tree FieldDecl) {
- tree AnnotateAttr = lookup_attribute("annotate", DECL_ATTRIBUTES(FieldDecl));
-
- const Type *SBP = Type::getInt8PtrTy(Context);
-
- Function *Fn = Intrinsic::getDeclaration(TheModule,
- Intrinsic::ptr_annotation,
- &SBP, 1);
-
-  // Get file and line number. FIXME: Should this be for the decl or the
-  // use? Is there location info for the use?
- Constant *LineNo = ConstantInt::get(Type::getInt32Ty(Context),
- DECL_SOURCE_LINE(FieldDecl));
- Constant *File = ConvertMetadataStringToGV(DECL_SOURCE_FILE(FieldDecl));
-
- File = TheFolder->CreateBitCast(File, SBP);
-
-  // There may be multiple annotate attributes. Pass the return value of
-  // lookup_attribute to successive lookups.
- while (AnnotateAttr) {
- // Each annotate attribute is a tree list.
- // Get value of list which is our linked list of args.
- tree args = TREE_VALUE(AnnotateAttr);
-
- // Each annotate attribute may have multiple args.
- // Treat each arg as if it were a separate annotate attribute.
- for (tree a = args; a; a = TREE_CHAIN(a)) {
- // Each element of the arg list is a tree list, so get value
- tree val = TREE_VALUE(a);
-
-      // Assert it's a string, and then get that string.
- assert(TREE_CODE(val) == STRING_CST &&
- "Annotate attribute arg should always be a string");
-
- Constant *strGV = EmitAddressOf(val);
-
-      // We cannot use the IRBuilder because it will constant fold away
-      // the GEP that is critical for distinguishing an annotate attribute
-      // on a whole struct from one on the first element of the struct.
- BitCastInst *CastFieldPtr = new BitCastInst(FieldPtr, SBP,
- FieldPtr->getName());
- Builder.Insert(CastFieldPtr);
-
- Value *Ops[4] = {
- CastFieldPtr, Builder.CreateBitCast(strGV, SBP),
- File, LineNo
- };
-
- const Type* FieldPtrType = FieldPtr->getType();
- FieldPtr = Builder.CreateCall(Fn, Ops, Ops+4);
- FieldPtr = Builder.CreateBitCast(FieldPtr, FieldPtrType);
- }
-
- // Get next annotate attribute.
- AnnotateAttr = TREE_CHAIN(AnnotateAttr);
- if (AnnotateAttr)
- AnnotateAttr = lookup_attribute("annotate", AnnotateAttr);
- }
- return FieldPtr;
-}
-
-LValue TreeToLLVM::EmitLV_ARRAY_REF(tree exp) {
-  // The result type is an ElementTy* in the case of an ARRAY_REF, and an
-  // array of ElementTy in the case of an ARRAY_RANGE_REF.
-
- tree Array = TREE_OPERAND(exp, 0);
- tree ArrayTreeType = TREE_TYPE(Array);
- tree Index = TREE_OPERAND(exp, 1);
- tree IndexType = TREE_TYPE(Index);
- tree ElementType = TREE_TYPE(ArrayTreeType);
-
- assert(TREE_CODE (ArrayTreeType) == ARRAY_TYPE && "Unknown ARRAY_REF!");
-
- Value *ArrayAddr;
- unsigned ArrayAlign;
-
- // First subtract the lower bound, if any, in the type of the index.
- Value *IndexVal = EmitRegister(Index);
- tree LowerBound = array_ref_low_bound(exp);
- if (!integer_zerop(LowerBound))
- IndexVal = Builder.CreateSub(IndexVal, EmitRegister(LowerBound), "",
- hasNUW(TREE_TYPE(Index)),
- hasNSW(TREE_TYPE(Index)));
-
- LValue ArrayAddrLV = EmitLV(Array);
- assert(!ArrayAddrLV.isBitfield() && "Arrays cannot be bitfields!");
- ArrayAddr = ArrayAddrLV.Ptr;
- ArrayAlign = ArrayAddrLV.getAlignment();
-
- const Type *IntPtrTy = getTargetData().getIntPtrType(Context);
- IndexVal = Builder.CreateIntCast(IndexVal, IntPtrTy,
- /*isSigned*/!TYPE_UNSIGNED(IndexType));
-
- // If we are indexing over a fixed-size type, just use a GEP.
- if (isSequentialCompatible(ArrayTreeType)) {
- // Avoid any assumptions about how the array type is represented in LLVM by
- // doing the GEP on a pointer to the first array element.
- const Type *EltTy = ConvertType(ElementType);
- ArrayAddr = Builder.CreateBitCast(ArrayAddr, EltTy->getPointerTo());
- Value *Ptr = POINTER_TYPE_OVERFLOW_UNDEFINED ?
- Builder.CreateInBoundsGEP(ArrayAddr, IndexVal) :
- Builder.CreateGEP(ArrayAddr, IndexVal);
- unsigned Alignment = MinAlign(ArrayAlign, TD.getABITypeAlignment(EltTy));
- return LValue(Builder.CreateBitCast(Ptr,
- PointerType::getUnqual(ConvertType(TREE_TYPE(exp)))),
- Alignment);
- }
-
- // Otherwise, just do raw, low-level pointer arithmetic. FIXME: this could be
- // much nicer in cases like:
- // float foo(int w, float A[][w], int g) { return A[g][0]; }
-
- if (VOID_TYPE_P(TREE_TYPE(ArrayTreeType))) {
- ArrayAddr = Builder.CreateBitCast(ArrayAddr, Type::getInt8PtrTy(Context));
- ArrayAddr = POINTER_TYPE_OVERFLOW_UNDEFINED ?
- Builder.CreateInBoundsGEP(ArrayAddr, IndexVal) :
- Builder.CreateGEP(ArrayAddr, IndexVal);
- return LValue(ArrayAddr, 1);
- }
-
- // FIXME: Might also get here if the element type has constant size, but is
- // humongous. Add support for this case.
- assert(TREE_OPERAND(exp, 3) && "Size missing for variable sized element!");
- // ScaleFactor is the size of the element type in units divided by (exactly)
- // TYPE_ALIGN_UNIT(ElementType).
- Value *ScaleFactor = Builder.CreateIntCast(EmitRegister(TREE_OPERAND(exp, 3)),
- IntPtrTy, /*isSigned*/false);
- assert(isPowerOf2_32(TYPE_ALIGN(ElementType)) &&
- "Alignment not a power of two!");
- assert(TYPE_ALIGN(ElementType) >= 8 && "Unit size not a multiple of 8 bits!");
- // ScaleType is chosen to correct for the division in ScaleFactor.
- const Type *ScaleType = IntegerType::get(Context, TYPE_ALIGN(ElementType));
- ArrayAddr = Builder.CreateBitCast(ArrayAddr, ScaleType->getPointerTo());
-
- IndexVal = Builder.CreateMul(IndexVal, ScaleFactor);
- unsigned Alignment = MinAlign(ArrayAlign, TYPE_ALIGN(ElementType) / 8);
- Value *Ptr = POINTER_TYPE_OVERFLOW_UNDEFINED ?
- Builder.CreateInBoundsGEP(ArrayAddr, IndexVal) :
- Builder.CreateGEP(ArrayAddr, IndexVal);
- return LValue(Builder.CreateBitCast(Ptr,
- PointerType::getUnqual(ConvertType(TREE_TYPE(exp)))),
- Alignment);
-}
-
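When the element size is only known at run time, the code above falls back to explicit pointer arithmetic: the index is scaled by the element size (expressed in units of the element alignment so alignment information survives) and added to the address of the first element. A toy sketch of the same arithmetic, simplified to plain bytes (the helper below is made up for the illustration):

    #include <cassert>
    #include <cstdint>

    // Address of element 'Index' of an array whose element size is only
    // known at run time: base + index * size, all in bytes.
    char *elementAddress(char *Base, std::intptr_t Index,
                         std::intptr_t ElementSizeInBytes) {
      return Base + Index * ElementSizeInBytes;
    }

    int main() {
      char Buffer[64] = {};
      // With 12-byte elements, element 3 starts 36 bytes into the array.
      assert(elementAddress(Buffer, 3, 12) == Buffer + 36);
      return 0;
    }
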
-LValue TreeToLLVM::EmitLV_BIT_FIELD_REF(tree exp) {
- LValue Ptr = EmitLV(TREE_OPERAND(exp, 0));
- assert(!Ptr.isBitfield() && "BIT_FIELD_REF operands cannot be bitfields!");
-
- unsigned BitStart = (unsigned)TREE_INT_CST_LOW(TREE_OPERAND(exp, 2));
- unsigned BitSize = (unsigned)TREE_INT_CST_LOW(TREE_OPERAND(exp, 1));
- const Type *ValTy = ConvertType(TREE_TYPE(exp));
-
- unsigned ValueSizeInBits = TD.getTypeSizeInBits(ValTy);
- assert(BitSize <= ValueSizeInBits &&
- "ValTy isn't large enough to hold the value loaded!");
-
- assert(ValueSizeInBits == TD.getTypeAllocSizeInBits(ValTy) &&
- "FIXME: BIT_FIELD_REF logic is broken for non-round types");
-
- // BIT_FIELD_REF values can have BitStart values that are quite large. We
- // know that the thing we are loading is ValueSizeInBits large. If BitStart
- // is larger than ValueSizeInBits, bump the pointer over to where it should
- // be.
- if (unsigned UnitOffset = BitStart / ValueSizeInBits) {
- // TODO: If Ptr.Ptr is a struct type or something, we can do much better
- // than this. e.g. check out when compiling unwind-dw2-fde-darwin.c.
- Ptr.Ptr = Builder.CreateBitCast(Ptr.Ptr, ValTy->getPointerTo());
- Ptr.Ptr = Builder.CreateGEP(Ptr.Ptr,
- ConstantInt::get(Type::getInt32Ty(Context),
- UnitOffset));
- BitStart -= UnitOffset*ValueSizeInBits;
- }
-
- // If this is referring to the whole field, return the whole thing.
- if (BitStart == 0 && BitSize == ValueSizeInBits) {
- return LValue(Builder.CreateBitCast(Ptr.Ptr, ValTy->getPointerTo()),
- Ptr.getAlignment());
- }
-
- return LValue(Builder.CreateBitCast(Ptr.Ptr, ValTy->getPointerTo()),
- 1, BitStart, BitSize);
-}
-
-LValue TreeToLLVM::EmitLV_COMPONENT_REF(tree exp) {
- LValue StructAddrLV = EmitLV(TREE_OPERAND(exp, 0));
- tree FieldDecl = TREE_OPERAND(exp, 1);
- unsigned LVAlign = StructAddrLV.getAlignment();
-
- assert((TREE_CODE(DECL_CONTEXT(FieldDecl)) == RECORD_TYPE ||
- TREE_CODE(DECL_CONTEXT(FieldDecl)) == UNION_TYPE ||
- TREE_CODE(DECL_CONTEXT(FieldDecl)) == QUAL_UNION_TYPE));
-
- const Type *StructTy = ConvertType(DECL_CONTEXT(FieldDecl));
-
- assert((!StructAddrLV.isBitfield() ||
- StructAddrLV.BitStart == 0) && "structs cannot be bitfields!");
-
- StructAddrLV.Ptr = Builder.CreateBitCast(StructAddrLV.Ptr,
- StructTy->getPointerTo());
- const Type *FieldTy = ConvertType(TREE_TYPE(FieldDecl));
-
- // BitStart - This is the actual offset of the field from the start of the
- // struct, in bits. For bitfields this may be on a non-byte boundary.
- unsigned BitStart;
- Value *FieldPtr;
-
- // If the GCC field directly corresponds to an LLVM field, handle it.
- unsigned MemberIndex = GetFieldIndex(FieldDecl, StructTy);
- if (MemberIndex < INT_MAX) {
- assert(!TREE_OPERAND(exp, 2) && "Constant not gimple min invariant?");
- // Get a pointer to the byte in which the GCC field starts.
- FieldPtr = Builder.CreateStructGEP(StructAddrLV.Ptr, MemberIndex);
- // Within that byte, the bit at which the GCC field starts.
- BitStart = TREE_INT_CST_LOW(DECL_FIELD_BIT_OFFSET(TREE_OPERAND(exp, 1)));
- BitStart &= 7;
- } else {
- // Offset will hold the field offset in octets.
- Value *Offset;
-
- assert(!(BITS_PER_UNIT & 7) && "Unit size not a multiple of 8 bits!");
- if (TREE_OPERAND(exp, 2)) {
- Offset = EmitRegister(TREE_OPERAND(exp, 2));
- // At this point the offset is measured in units divided by (exactly)
- // (DECL_OFFSET_ALIGN / BITS_PER_UNIT). Convert to octets.
- unsigned factor = DECL_OFFSET_ALIGN(FieldDecl) / 8;
- if (factor != 1)
- Offset = Builder.CreateMul(Offset,
- ConstantInt::get(Offset->getType(), factor));
- } else {
- assert(DECL_FIELD_OFFSET(FieldDecl) && "Field offset not available!");
- Offset = EmitRegister(DECL_FIELD_OFFSET(FieldDecl));
- // At this point the offset is measured in units. Convert to octets.
- unsigned factor = BITS_PER_UNIT / 8;
- if (factor != 1)
- Offset = Builder.CreateMul(Offset,
- ConstantInt::get(Offset->getType(), factor));
- }
-
- // Here BitStart gives the offset of the field in bits from Offset.
- BitStart = getInt64(DECL_FIELD_BIT_OFFSET(FieldDecl), true);
-
- // Incorporate as much of it as possible into the pointer computation.
- unsigned ByteOffset = BitStart / 8;
- if (ByteOffset > 0) {
- Offset = Builder.CreateAdd(Offset,
- ConstantInt::get(Offset->getType(), ByteOffset));
- BitStart -= ByteOffset*8;
- }
-
- const Type *BytePtrTy = Type::getInt8PtrTy(Context);
- FieldPtr = Builder.CreateBitCast(StructAddrLV.Ptr, BytePtrTy);
- FieldPtr = Builder.CreateInBoundsGEP(FieldPtr, Offset);
- FieldPtr = Builder.CreateBitCast(FieldPtr, FieldTy->getPointerTo());
- }
-
- assert(BitStart < 8 && "Bit offset not properly incorporated in the pointer");
-
- // The alignment is given by DECL_ALIGN. Be conservative and don't assume
- // that the field is properly aligned even if the type is not.
- LVAlign = MinAlign(LVAlign, DECL_ALIGN(FieldDecl) / 8);
-
- // If the FIELD_DECL has an annotate attribute on it, emit it.
- if (lookup_attribute("annotate", DECL_ATTRIBUTES(FieldDecl)))
- FieldPtr = EmitFieldAnnotation(FieldPtr, FieldDecl);
-
- if (!isBitfield(FieldDecl)) {
- assert(BitStart == 0 && "Not a bitfield but not at a byte offset!");
- // Make sure we return a pointer to the right type.
- const Type *EltTy = ConvertType(TREE_TYPE(exp));
- FieldPtr = Builder.CreateBitCast(FieldPtr, EltTy->getPointerTo());
- return LValue(FieldPtr, LVAlign);
- }
-
- // If this is a bitfield, the declared type must be an integral type.
- assert(FieldTy->isIntegerTy() && "Invalid bitfield");
-
- assert(DECL_SIZE(FieldDecl) &&
- TREE_CODE(DECL_SIZE(FieldDecl)) == INTEGER_CST &&
- "Variable sized bitfield?");
- unsigned BitfieldSize = TREE_INT_CST_LOW(DECL_SIZE(FieldDecl));
-
- const Type *LLVMFieldTy =
- cast<PointerType>(FieldPtr->getType())->getElementType();
-
- // If the LLVM notion of the field type contains the entire bitfield being
- // accessed, use the LLVM type. This avoids pointer casts and other bad
- // things that are difficult to clean up later. This occurs in cases like
- // "struct X{ unsigned long long x:50; unsigned y:2; }" when accessing y.
- // We want to access the field as a ulong, not as a uint with an offset.
- if (LLVMFieldTy->isIntegerTy() &&
- LLVMFieldTy->getPrimitiveSizeInBits() >= BitStart + BitfieldSize &&
- LLVMFieldTy->getPrimitiveSizeInBits() ==
- TD.getTypeAllocSizeInBits(LLVMFieldTy))
- FieldTy = LLVMFieldTy;
- else
- // If the field result type T is a bool or some other curiously sized
- // integer type, then not all bits may be accessible by advancing a T*
- // and loading through it. For example, if the result type is i1 then
-    // only the first bit in each byte would be loaded. Even if T is byte
-    // sized, like an i24, there may be trouble: incrementing a T* will move
-    // the position by 32 bits, not 24, leaving the upper 8 of those 32 bits
-    // inaccessible. Avoid this by rounding up the size appropriately.
- FieldTy = IntegerType::get(Context, TD.getTypeAllocSizeInBits(FieldTy));
-
- assert(FieldTy->getPrimitiveSizeInBits() ==
- TD.getTypeAllocSizeInBits(FieldTy) && "Field type not sequential!");
-
- // If this is a bitfield, the field may span multiple fields in the LLVM
- // type. As such, cast the pointer to be a pointer to the declared type.
- FieldPtr = Builder.CreateBitCast(FieldPtr, FieldTy->getPointerTo());
-
- unsigned LLVMValueBitSize = FieldTy->getPrimitiveSizeInBits();
- // Finally, because bitfields can span LLVM fields, and because the start
- // of the first LLVM field (where FieldPtr currently points) may be up to
-  // 63 bits away from the start of the bitfield, it is possible that
- // *FieldPtr doesn't contain any of the bits for this bitfield. If needed,
- // adjust FieldPtr so that it is close enough to the bitfield that
- // *FieldPtr contains the first needed bit. Be careful to make sure that
- // the pointer remains appropriately aligned.
- if (BitStart >= LLVMValueBitSize) {
- // In this case, we know that the alignment of the field is less than
- // the size of the field. To get the pointer close enough, add some
- // number of alignment units to the pointer.
- unsigned ByteAlignment = TD.getABITypeAlignment(FieldTy);
- // It is possible that an individual field is Packed. This information is
- // not reflected in FieldTy. Check DECL_PACKED here.
- if (DECL_PACKED(FieldDecl))
- ByteAlignment = 1;
- assert(ByteAlignment*8 <= LLVMValueBitSize && "Unknown overlap case!");
- unsigned NumAlignmentUnits = BitStart/(ByteAlignment*8);
- assert(NumAlignmentUnits && "Not adjusting pointer?");
-
- // Compute the byte offset, and add it to the pointer.
- unsigned ByteOffset = NumAlignmentUnits*ByteAlignment;
- LVAlign = MinAlign(LVAlign, ByteOffset);
-
- Constant *Offset = ConstantInt::get(TD.getIntPtrType(Context), ByteOffset);
- FieldPtr = Builder.CreatePtrToInt(FieldPtr, Offset->getType());
- FieldPtr = Builder.CreateAdd(FieldPtr, Offset);
- FieldPtr = Builder.CreateIntToPtr(FieldPtr, FieldTy->getPointerTo());
-
- // Adjust bitstart to account for the pointer movement.
- BitStart -= ByteOffset*8;
-
- // Check that this worked. Note that the bitfield may extend beyond
- // the end of *FieldPtr, for example because BitfieldSize is the same
- // as LLVMValueBitSize but BitStart > 0.
- assert(BitStart < LLVMValueBitSize &&
- BitStart+BitfieldSize < 2*LLVMValueBitSize &&
- "Couldn't get bitfield into value!");
- }
-
- // Okay, everything is good. Return this as a bitfield if we can't
- // return it as a normal l-value. (e.g. "struct X { int X : 32 };" ).
- LValue LV(FieldPtr, LVAlign);
- if (BitfieldSize != LLVMValueBitSize || BitStart != 0) {
- // Writing these fields directly rather than using the appropriate LValue
- // constructor works around a miscompilation by gcc-4.4 in Release mode.
- LV.BitStart = BitStart;
- LV.BitSize = BitfieldSize;
- }
- return LV;
-}
-
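In the variable-offset path above, the field's bit offset from the struct pointer is split into whole bytes (folded into the pointer computation) and a residual bit position smaller than eight. A toy illustration of that decomposition (the layout assumes a typical x86 ABI; the struct and helper are made up for the example):

    #include <cassert>

    struct FieldPosition {
      unsigned ByteOffset; // folded into the field pointer
      unsigned BitStart;   // remaining bit offset, always < 8
    };

    FieldPosition splitBitOffset(unsigned BitOffsetFromStruct) {
      FieldPosition P;
      P.ByteOffset = BitOffsetFromStruct / 8;
      P.BitStart = BitOffsetFromStruct % 8;
      return P;
    }

    int main() {
      // The 'y' member of "struct X { unsigned long long x:50; unsigned y:2; }"
      // starts 50 bits into the struct: 6 whole bytes plus 2 bits.
      FieldPosition P = splitBitOffset(50);
      assert(P.ByteOffset == 6 && P.BitStart == 2);
      return 0;
    }
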
-LValue TreeToLLVM::EmitLV_DECL(tree exp) {
- Value *Decl = DEFINITION_LOCAL(exp);
- if (Decl == 0) {
- if (errorcount || sorrycount) {
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- const PointerType *PTy = Ty->getPointerTo();
- LValue LV(ConstantPointerNull::get(PTy), 1);
- return LV;
- }
- llvm_unreachable("Referencing decl that hasn't been laid out");
- }
-
- const Type *Ty = ConvertType(TREE_TYPE(exp));
- // If we have "extern void foo", make the global have type {} instead of
- // type void.
- if (Ty->isVoidTy()) Ty = StructType::get(Context);
- const PointerType *PTy = Ty->getPointerTo();
- unsigned Alignment = Ty->isSized() ? TD.getABITypeAlignment(Ty) : 1;
- if (DECL_ALIGN(exp)) {
- if (DECL_USER_ALIGN(exp) || 8 * Alignment < (unsigned)DECL_ALIGN(exp))
- Alignment = DECL_ALIGN(exp) / 8;
- }
-
- return LValue(Builder.CreateBitCast(Decl, PTy), Alignment);
-}
-
-LValue TreeToLLVM::EmitLV_INDIRECT_REF(tree exp) {
- // The lvalue is just the address.
- LValue LV = LValue(EmitRegister(TREE_OPERAND(exp, 0)), expr_align(exp) / 8);
- // May need a useless type conversion (useless_type_conversion_p), for example
- // when INDIRECT_REF is applied to a void*, resulting in a non-void type.
- LV.Ptr = UselesslyTypeConvert(LV.Ptr,
- ConvertType(TREE_TYPE(exp))->getPointerTo());
- return LV;
-}
-
-LValue TreeToLLVM::EmitLV_VIEW_CONVERT_EXPR(tree exp) {
- // The address is the address of the operand.
- LValue LV = EmitLV(TREE_OPERAND(exp, 0));
- // The type is the type of the expression.
- LV.Ptr = Builder.CreateBitCast(LV.Ptr,
- ConvertType(TREE_TYPE(exp))->getPointerTo());
- return LV;
-}
-
-LValue TreeToLLVM::EmitLV_WITH_SIZE_EXPR(tree exp) {
- // The address is the address of the operand.
- return EmitLV(TREE_OPERAND(exp, 0));
-}
-
-LValue TreeToLLVM::EmitLV_XXXXPART_EXPR(tree exp, unsigned Idx) {
- LValue Ptr = EmitLV(TREE_OPERAND(exp, 0));
- assert(!Ptr.isBitfield() &&
- "REALPART_EXPR / IMAGPART_EXPR operands cannot be bitfields!");
- unsigned Alignment;
- if (Idx == 0)
-    // REALPART alignment is the same as that of the complex operand.
- Alignment = Ptr.getAlignment();
- else
- // IMAGPART alignment = MinAlign(Ptr.Alignment, sizeof field);
- Alignment = MinAlign(Ptr.getAlignment(),
- TD.getTypeAllocSize(Ptr.Ptr->getType()));
- return LValue(Builder.CreateStructGEP(Ptr.Ptr, Idx), Alignment);
-}
-
-LValue TreeToLLVM::EmitLV_SSA_NAME(tree exp) {
- // TODO: Check the ssa name is being used as an rvalue, see EmitLoadOfLValue.
- Value *Temp = CreateTemporary(ConvertType(TREE_TYPE(exp)));
- Builder.CreateStore(EmitReg_SSA_NAME(exp), Temp);
- return LValue(Temp, 1);
-}
-
-LValue TreeToLLVM::EmitLV_TARGET_MEM_REF(tree exp) {
- // TODO: Take the address space into account.
- // TODO: Improve the alignment estimate.
- struct mem_address addr;
- get_address_description (exp, &addr);
-
- LValue Ref;
- Value *Delta = 0; // Offset from base pointer in units
- if (addr.symbol) {
- Ref = EmitLV(addr.symbol);
- if (addr.base && !integer_zerop (addr.base))
- Delta = EmitRegister(addr.base);
- } else {
- assert(addr.base && "TARGET_MEM_REF has neither base nor symbol!");
- Ref = LValue(EmitRegister(addr.base), 1);
- }
-
- if (addr.index) {
- Value *Index = EmitRegister(addr.index);
- if (addr.step && !integer_onep (addr.step))
- Index = Builder.CreateMul(Index, EmitRegisterConstant(addr.step));
- Delta = Delta ? Builder.CreateAdd(Delta, Index) : Index;
- }
-
- if (addr.offset && !integer_zerop (addr.offset)) {
- Constant *Offset = EmitRegisterConstant(addr.offset);
- Delta = Delta ? Builder.CreateAdd(Delta, Offset) : Offset;
- }
-
- if (Delta) {
- // Advance the base pointer by the given number of units.
- Ref.Ptr = Builder.CreateBitCast(Ref.Ptr, GetUnitPointerType(Context));
- Ref.Ptr = POINTER_TYPE_OVERFLOW_UNDEFINED ?
- Builder.CreateInBoundsGEP(Ref.Ptr, Delta)
- : Builder.CreateGEP(Ref.Ptr, Delta);
- Ref.setAlignment(1); // Let the optimizers compute the alignment.
- }
-
- // The result can be of a different pointer type even if we didn't advance it.
- Ref.Ptr = UselesslyTypeConvert(Ref.Ptr,
- GetRegType(TREE_TYPE(exp))->getPointerTo());
-
- return Ref;
-}
-
-Constant *TreeToLLVM::EmitLV_LABEL_DECL(tree exp) {
- return BlockAddress::get(Fn, getLabelDeclBlock(exp));
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Emit helpers ...
-//===----------------------------------------------------------------------===//
-
-/// EmitMinInvariant - The given value is constant in this function. Return the
-/// corresponding LLVM value. Only creates code in the entry block.
-Value *TreeToLLVM::EmitMinInvariant(tree reg) {
- Value *V = (TREE_CODE(reg) == ADDR_EXPR) ?
- EmitInvariantAddress(reg) : EmitRegisterConstant(reg);
- assert(V->getType() == GetRegType(TREE_TYPE(reg)) &&
- "Gimple min invariant has wrong type!");
- return V;
-}
-
-/// EmitInvariantAddress - The given address is constant in this function.
-/// Return the corresponding LLVM value. Only creates code in the entry block.
-Value *TreeToLLVM::EmitInvariantAddress(tree addr) {
- assert(is_gimple_invariant_address(addr) &&
- "Expected a locally constant address!");
- assert(is_gimple_reg_type(TREE_TYPE(addr)) && "Not of register type!");
-
- // Any generated code goes in the entry block.
- BasicBlock *EntryBlock = Fn->begin();
-
- // Note the current builder position.
- BasicBlock *SavedInsertBB = Builder.GetInsertBlock();
- BasicBlock::iterator SavedInsertPoint = Builder.GetInsertPoint();
-
- // Pop the entry block terminator. There may not be a terminator if we are
- // recursing or if the entry block was not yet finished.
- Instruction *Terminator = EntryBlock->getTerminator();
- assert(((SavedInsertBB != EntryBlock && Terminator) ||
- (SavedInsertPoint == EntryBlock->end() && !Terminator)) &&
- "Insertion point doesn't make sense!");
- if (Terminator)
- Terminator->removeFromParent();
-
- // Point the builder at the end of the entry block.
- Builder.SetInsertPoint(EntryBlock);
-
- // Calculate the address.
- assert(TREE_CODE(addr) == ADDR_EXPR && "Invariant address not ADDR_EXPR!");
- Value *Address = EmitADDR_EXPR(addr);
-
- // Restore the entry block terminator.
- if (Terminator)
- EntryBlock->getInstList().push_back(Terminator);
-
- // Restore the builder insertion point.
- if (SavedInsertBB != EntryBlock)
- Builder.SetInsertPoint(SavedInsertBB, SavedInsertPoint);
-
- assert(Address->getType() == GetRegType(TREE_TYPE(addr)) &&
- "Invariant address has wrong type!");
- return Address;
-}
-
-/// EmitRegisterConstant - Convert the given global constant of register type to
-/// an LLVM constant. Creates no code, only constants.
-Constant *TreeToLLVM::EmitRegisterConstant(tree reg) {
-#ifndef NDEBUG
- if (!is_gimple_constant(reg)) {
- debug_tree(reg);
- llvm_unreachable("Not a gimple constant!");
- }
-#endif
- assert(is_gimple_reg_type(TREE_TYPE(reg)) && "Not of register type!");
-
- switch (TREE_CODE(reg)) {
- default:
- debug_tree(reg);
- llvm_unreachable("Unhandled GIMPLE constant!");
-
- case INTEGER_CST:
- return EmitIntegerRegisterConstant(reg);
- case REAL_CST:
- return EmitRealRegisterConstant(reg);
- //case FIXED_CST: // Fixed point constant - not yet supported.
- //case STRING_CST: // Allowed by is_gimple_constant, but no known examples.
- case COMPLEX_CST:
- return EmitComplexRegisterConstant(reg);
- case VECTOR_CST:
- return EmitVectorRegisterConstant(reg);
- case CONSTRUCTOR:
- // Vector constant constructors are gimple invariant. See GCC testcase
- // pr34856.c for an example.
- return EmitConstantVectorConstructor(reg);
- }
-}
-
-/// EncodeExpr - Write the given expression into Buffer as it would appear in
-/// memory on the target (the buffer is resized to contain exactly the bytes
-/// written). Return the number of bytes written; this can also be obtained
-/// by querying the buffer's size.
-/// The following kinds of expressions are currently supported: INTEGER_CST,
-/// REAL_CST, COMPLEX_CST, VECTOR_CST, STRING_CST.
-static unsigned EncodeExpr(tree exp, SmallVectorImpl<unsigned char> &Buffer) {
- const tree type = TREE_TYPE(exp);
- unsigned SizeInBytes = (TREE_INT_CST_LOW(TYPE_SIZE(type)) + 7) / 8;
- Buffer.resize(SizeInBytes);
- unsigned BytesWritten = native_encode_expr(exp, &Buffer[0], SizeInBytes);
- assert(BytesWritten == SizeInBytes && "Failed to fully encode expression!");
- return BytesWritten;
-}
-
-/// EmitComplexRegisterConstant - Turn the given COMPLEX_CST into an LLVM
-/// constant of the corresponding register type.
-Constant *TreeToLLVM::EmitComplexRegisterConstant(tree reg) {
- Constant *Elts[2] = {
- EmitRegisterConstant(TREE_REALPART(reg)),
- EmitRegisterConstant(TREE_IMAGPART(reg))
- };
- return ConstantStruct::get(Context, Elts, 2, false);
-}
-
-/// EmitIntegerRegisterConstant - Turn the given INTEGER_CST into an LLVM
-/// constant of the corresponding register type.
-Constant *TreeToLLVM::EmitIntegerRegisterConstant(tree reg) {
- unsigned Precision = TYPE_PRECISION(TREE_TYPE(reg));
-
- ConstantInt *CI;
- if (HOST_BITS_PER_WIDE_INT < integerPartWidth) {
- assert(2 * HOST_BITS_PER_WIDE_INT <= integerPartWidth &&
- "Unsupported host integer precision!");
- unsigned ShiftAmt = HOST_BITS_PER_WIDE_INT;
- integerPart Val = (integerPart)(unsigned HOST_WIDE_INT)TREE_INT_CST_LOW(reg)
- + ((integerPart)(unsigned HOST_WIDE_INT)TREE_INT_CST_HIGH(reg) << ShiftAmt);
- CI = ConstantInt::get(Context, APInt(Precision, Val));
- } else {
- assert(HOST_BITS_PER_WIDE_INT == integerPartWidth &&
- "The engines cannae' take it captain!");
- integerPart Parts[] = { TREE_INT_CST_LOW(reg), TREE_INT_CST_HIGH(reg) };
- CI = ConstantInt::get(Context, APInt(Precision, 2, Parts));
- }
-
- // The destination can be a pointer, integer or floating point type, so we
- // need a generalized cast here.
- const Type *Ty = GetRegType(TREE_TYPE(reg));
- Instruction::CastOps opcode = CastInst::getCastOpcode(CI, false, Ty,
- !TYPE_UNSIGNED(TREE_TYPE(reg)));
- return TheFolder->CreateCast(opcode, CI, Ty);
-}
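
When HOST_BITS_PER_WIDE_INT is narrower than an APInt word, the first branch above glues the low and high host words together before building the APInt. A standalone sketch of that word-joining step, assuming 32-bit host words (names are illustrative):

    #include <cassert>
    #include <cstdint>

    // Join the low and high 32-bit host words into one 64-bit value,
    // with the low word in the least significant bits.
    static uint64_t join_words(uint32_t low, uint32_t high) {
      return (uint64_t)low + ((uint64_t)high << 32);
    }

    int main() {
      assert(join_words(0xDEADBEEFu, 0x1u) == 0x1DEADBEEFULL);
      return 0;
    }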
-
-/// EmitRealRegisterConstant - Turn the given REAL_CST into an LLVM constant
-/// of the corresponding register type.
-Constant *TreeToLLVM::EmitRealRegisterConstant(tree reg) {
- // TODO: Rather than going through memory, construct the APFloat directly from
- // the real_value. This works fine for zero, inf and nan values, but APFloat
- // has no constructor for normal numbers, i.e. constructing a normal number
- // from the exponent and significand.
- // TODO: Test implementation on a big-endian machine.
-
- // Encode the constant in Buffer in target format.
- SmallVector<unsigned char, 16> Buffer;
- EncodeExpr(reg, Buffer);
-
- // Discard any alignment padding, which we assume comes at the end.
- unsigned Precision = TYPE_PRECISION(TREE_TYPE(reg));
- assert((Precision & 7) == 0 && "Unsupported real number precision!");
- Buffer.resize(Precision / 8);
-
- // We are going to view the buffer as an array of APInt words. Ensure that
- // the buffer contains a whole number of words by extending it if necessary.
- unsigned Words = (Precision + integerPartWidth - 1) / integerPartWidth;
- // On a little-endian machine extend the buffer by adding bytes to the end.
- Buffer.resize(Words * (integerPartWidth / 8));
- // On a big-endian machine extend the buffer by adding bytes to the beginning.
- if (BYTES_BIG_ENDIAN)
- std::copy_backward(Buffer.begin(), Buffer.begin() + Precision / 8,
- Buffer.end());
-
- // Ensure that the least significant word comes first: we are going to make an
- // APInt, and the APInt constructor wants the least significant word first.
- integerPart *Parts = (integerPart *)&Buffer[0];
- if (BYTES_BIG_ENDIAN)
- std::reverse(Parts, Parts + Words);
-
- bool isPPC_FP128 = ConvertType(TREE_TYPE(reg))->isPPC_FP128Ty();
- if (isPPC_FP128) {
- // This type is actually a pair of doubles in disguise. They turn up the
- // wrong way round here, so flip them.
- assert(FLOAT_WORDS_BIG_ENDIAN && "PPC not big endian!");
- assert(Words == 2 && Precision == 128 && "Strange size for PPC_FP128!");
- std::swap(Parts[0], Parts[1]);
- }
-
- // Form an APInt from the buffer, an APFloat from the APInt, and the desired
- // floating point constant from the APFloat, phew!
- const APInt &I = APInt(Precision, Words, Parts);
- return ConstantFP::get(Context, APFloat(I, !isPPC_FP128));
-}
-
-/// EmitConstantVectorConstructor - Turn the given constant CONSTRUCTOR into
-/// an LLVM constant of the corresponding vector register type.
-Constant *TreeToLLVM::EmitConstantVectorConstructor(tree reg) {
- Constant *C = ConvertConstant(reg);
- return Mem2Reg(C, TREE_TYPE(reg), *TheFolder);
-}
-
-/// EmitVectorRegisterConstant - Turn the given VECTOR_CST into an LLVM constant
-/// of the corresponding register type.
-Constant *TreeToLLVM::EmitVectorRegisterConstant(tree reg) {
- // If there are no elements then immediately return the default value for a
- // small speedup.
- if (!TREE_VECTOR_CST_ELTS(reg))
- return getDefaultValue(GetRegType(TREE_TYPE(reg)));
-
- // Convert the elements.
- SmallVector<Constant*, 8> Elts;
- for (tree elt = TREE_VECTOR_CST_ELTS(reg); elt; elt = TREE_CHAIN(elt))
- Elts.push_back(EmitRegisterConstant(TREE_VALUE(elt)));
-
- // If there weren't enough elements then set the rest of the vector to the
- // default value.
- if (Elts.size() < TYPE_VECTOR_SUBPARTS(TREE_TYPE(reg))) {
- Constant *Default = getDefaultValue(GetRegType(TREE_TYPE(TREE_TYPE(reg))));
- Elts.append(TYPE_VECTOR_SUBPARTS(TREE_TYPE(reg)) - Elts.size(), Default);
- }
-
- return ConstantVector::get(Elts);
-}
-
-/// Mem2Reg - Convert a value of in-memory type (that given by ConvertType)
-/// to in-register type (that given by GetRegType).
-Value *TreeToLLVM::Mem2Reg(Value *V, tree type, LLVMBuilder &Builder) {
- const Type *MemTy = V->getType();
- const Type *RegTy = GetRegType(type);
- assert(MemTy == ConvertType(type) && "Not of memory type!");
-
- if (MemTy == RegTy)
- return V;
-
- assert(RegTy->isIntegerTy() && MemTy->isIntegerTy() &&
- "Unexpected type mismatch!");
- return Builder.CreateIntCast(V, RegTy, /*isSigned*/!TYPE_UNSIGNED(type));
-}
-Constant *TreeToLLVM::Mem2Reg(Constant *C, tree type, TargetFolder &Folder) {
- const Type *MemTy = C->getType();
- const Type *RegTy = GetRegType(type);
- assert(MemTy == ConvertType(type) && "Not of memory type!");
-
- if (MemTy == RegTy)
- return C;
-
- assert(RegTy->isIntegerTy() && MemTy->isIntegerTy() &&
- "Unexpected type mismatch!");
- return Folder.CreateIntCast(C, RegTy, /*isSigned*/!TYPE_UNSIGNED(type));
-}
-
-/// Reg2Mem - Convert a value of in-register type (that given by GetRegType)
-/// to in-memory type (that given by ConvertType).
-Value *TreeToLLVM::Reg2Mem(Value *V, tree type, LLVMBuilder &Builder) {
- const Type *RegTy = V->getType();
- const Type *MemTy = ConvertType(type);
- assert(RegTy == GetRegType(type) && "Not of register type!");
-
- if (RegTy == MemTy)
- return V;
-
- assert(RegTy->isIntegerTy() && MemTy->isIntegerTy() &&
- "Unexpected type mismatch!");
- return Builder.CreateIntCast(V, MemTy, /*isSigned*/!TYPE_UNSIGNED(type));
-}
-
-/// LoadRegisterFromMemory - Loads a value of the given scalar GCC type from
-/// the memory location pointed to by Loc. Takes care of adjusting for any
-/// differences between in-memory and in-register types (the returned value
-/// is of in-register type, as returned by GetRegType).
-Value *TreeToLLVM::LoadRegisterFromMemory(MemRef Loc, tree type,
- LLVMBuilder &Builder) {
- const Type *MemTy = ConvertType(type);
- Value *Ptr = Builder.CreateBitCast(Loc.Ptr, MemTy->getPointerTo());
- LoadInst *LI = Builder.CreateLoad(Ptr, Loc.Volatile);
- LI->setAlignment(Loc.getAlignment());
- return Mem2Reg(LI, type, Builder);
-}
-
-/// StoreRegisterToMemory - Stores the given value to the memory pointed to by
-/// Loc. Takes care of adjusting for any differences between the value's type
-/// (which is the in-register type given by GetRegType) and the in-memory type.
-void TreeToLLVM::StoreRegisterToMemory(Value *V, MemRef Loc, tree type,
- LLVMBuilder &Builder) {
- const Type *MemTy = ConvertType(type);
- Value *Ptr = Builder.CreateBitCast(Loc.Ptr, MemTy->getPointerTo());
- StoreInst *SI = Builder.CreateStore(Reg2Mem(V, type, Builder), Ptr,
- Loc.Volatile);
- SI->setAlignment(Loc.getAlignment());
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... EmitReg* - Convert register expression to LLVM...
-//===----------------------------------------------------------------------===//
-
-/// GetRegType - Returns the LLVM type to use for registers that hold a value
-/// of the scalar GCC type 'type'. All of the EmitReg* routines use this to
-/// determine the LLVM type to return.
-const Type *TreeToLLVM::GetRegType(tree type) {
- assert(!AGGREGATE_TYPE_P(type) && "Registers must have a scalar type!");
- assert(TREE_CODE(type) != VOID_TYPE && "Registers cannot have void type!");
-
- // For integral types, convert based on the type precision.
- if (TREE_CODE(type) == BOOLEAN_TYPE || TREE_CODE(type) == ENUMERAL_TYPE ||
- TREE_CODE(type) == INTEGER_TYPE)
- return IntegerType::get(Context, TYPE_PRECISION(type));
-
- // Otherwise, return the type used to represent memory.
- return ConvertType(type);
-}
-
-/// EmitMemory - Convert the specified gimple register or local constant of
-/// register type to an LLVM value with in-memory type (given by ConvertType).
-Value *TreeToLLVM::EmitMemory(tree reg) {
- return Reg2Mem(EmitRegister(reg), TREE_TYPE(reg), Builder);
-}
-
-/// EmitRegister - Convert the specified gimple register or local constant of
-/// register type to an LLVM value. Only creates code in the entry block.
-Value *TreeToLLVM::EmitRegister(tree reg) {
- while (TREE_CODE(reg) == OBJ_TYPE_REF) reg = OBJ_TYPE_REF_EXPR(reg);
- return (TREE_CODE(reg) == SSA_NAME) ?
- EmitReg_SSA_NAME(reg) : EmitMinInvariant(reg);
-}
-
-/// EmitReg_SSA_NAME - Return the defining value of the given SSA_NAME.
-/// Only creates code in the entry block.
-Value *TreeToLLVM::EmitReg_SSA_NAME(tree reg) {
- assert(is_gimple_reg_type(TREE_TYPE(reg)) && "Not of register type!");
-
- // If we already found the definition of the SSA name, return it.
- if (Value *ExistingValue = SSANames[reg]) {
- assert(ExistingValue->getType() == GetRegType(TREE_TYPE(reg)) &&
- "SSA name has wrong type!");
- if (!isSSAPlaceholder(ExistingValue))
- return ExistingValue;
- }
-
- // If this is not the definition of the SSA name, return a placeholder value.
- if (!SSA_NAME_IS_DEFAULT_DEF(reg)) {
- if (Value *ExistingValue = SSANames[reg])
- return ExistingValue; // The type was sanity checked above.
- return SSANames[reg] = GetSSAPlaceholder(GetRegType(TREE_TYPE(reg)));
- }
-
- // This SSA name is the default definition for the underlying symbol.
-
- // The underlying symbol is an SSA variable.
- tree var = SSA_NAME_VAR(reg);
- assert(SSA_VAR_P(var) && "Not an SSA variable!");
-
- // If the variable is itself an ssa name, use its LLVM value.
- if (TREE_CODE (var) == SSA_NAME) {
- Value *Val = EmitReg_SSA_NAME(var);
- assert(Val->getType() == GetRegType(TREE_TYPE(reg)) &&
- "SSA name has wrong type!");
- return DefineSSAName(reg, Val);
- }
-
- // Otherwise the symbol is a VAR_DECL, PARM_DECL or RESULT_DECL. A default
- // definition is only created when the very first reference to the variable
- // in the function is a read, and it stands for the value that is read; so
- // the value is undefined except in the case of PARM_DECLs.
- if (TREE_CODE(var) != PARM_DECL)
- return DefineSSAName(reg, UndefValue::get(GetRegType(TREE_TYPE(reg))));
-
- // Read the initial value of the parameter and associate it with the ssa name.
- assert(DECL_LOCAL_IF_SET(var) && "Parameter not laid out?");
-
- unsigned Alignment = DECL_ALIGN(var);
- assert(Alignment != 0 && "Parameter with unknown alignment!");
-
- // Perform the load in the entry block, after all parameters have been set up
- // with their initial values, and before any modifications to their values.
-
- // Create a builder that inserts code before the SSAInsertionPoint marker.
- LLVMBuilder SSABuilder(Context, Builder.getFolder());
- SSABuilder.SetInsertPoint(SSAInsertionPoint->getParent(), SSAInsertionPoint);
-
- // Use it to load the parameter value.
- MemRef ParamLoc(DECL_LOCAL_IF_SET(var), Alignment, false);
- Value *Def = LoadRegisterFromMemory(ParamLoc, TREE_TYPE(reg), SSABuilder);
-
- if (flag_verbose_asm)
- NameValue(Def, reg);
- return DefineSSAName(reg, Def);
-}
-
-// Unary expressions.
-Value *TreeToLLVM::EmitReg_ABS_EXPR(tree op) {
- if (!FLOAT_TYPE_P(TREE_TYPE(op))) {
- Value *Op = EmitRegister(op);
- Value *OpN = Builder.CreateNeg(Op, Op->getName()+"neg");
- ICmpInst::Predicate pred = TYPE_UNSIGNED(TREE_TYPE(op)) ?
- ICmpInst::ICMP_UGE : ICmpInst::ICMP_SGE;
- Value *Cmp = Builder.CreateICmp(pred, Op,
- Constant::getNullValue(Op->getType()), "abscond");
- return Builder.CreateSelect(Cmp, Op, OpN, Op->getName()+"abs");
- }
-
- // Turn FP abs into fabs/fabsf.
- StringRef Name = SelectFPName(TREE_TYPE(op), "fabsf", "fabs", "fabsl");
- CallInst *Call = EmitSimpleCall(Name, TREE_TYPE(op), op, NULL);
- Call->setDoesNotThrow();
- Call->setDoesNotAccessMemory();
- return Call;
-}
-
-Value *TreeToLLVM::EmitReg_BIT_NOT_EXPR(tree op) {
- Value *Op = EmitRegister(op);
- return Builder.CreateNot(Op, Op->getName()+"not");
-}
-
-Value *TreeToLLVM::EmitReg_CONJ_EXPR(tree op) {
- tree elt_type = TREE_TYPE(TREE_TYPE(op));
- Value *R, *I;
- SplitComplex(EmitRegister(op), R, I, elt_type);
-
- // ~(a+ib) = a + i*-b
- I = CreateAnyNeg(I, elt_type);
-
- return CreateComplex(R, I, elt_type);
-}
-
-Value *TreeToLLVM::EmitReg_CONVERT_EXPR(tree type, tree op) {
- return CastToAnyType(EmitRegister(op), !TYPE_UNSIGNED(TREE_TYPE(op)),
- GetRegType(type), !TYPE_UNSIGNED(type));
-}
-
-Value *TreeToLLVM::EmitReg_NEGATE_EXPR(tree op) {
- Value *V = EmitRegister(op);
- tree type = TREE_TYPE(op);
-
- if (TREE_CODE(type) == COMPLEX_TYPE) {
- tree elt_type = TREE_TYPE(type);
- Value *R, *I; SplitComplex(V, R, I, elt_type);
-
- // -(a+ib) = -a + i*-b
- R = CreateAnyNeg(R, elt_type);
- I = CreateAnyNeg(I, elt_type);
-
- return CreateComplex(R, I, elt_type);
- }
-
- return CreateAnyNeg(V, type);
-}
-
-Value *TreeToLLVM::EmitReg_PAREN_EXPR(tree op) {
- // TODO: Understand and correctly deal with this subtle expression.
- return EmitRegister(op);
-}
-
-Value *TreeToLLVM::EmitReg_TRUTH_NOT_EXPR(tree type, tree op) {
- Value *V = EmitRegister(op);
- if (!V->getType()->isIntegerTy(1))
- V = Builder.CreateICmpNE(V,
- Constant::getNullValue(V->getType()), "toBool");
- V = Builder.CreateNot(V, V->getName()+"not");
- return Builder.CreateIntCast(V, GetRegType(type), /*isSigned*/false);
-}
-
-// Comparisons.
-
-/// EmitCompare - Compare LHS with RHS using the appropriate comparison code.
-/// The result is an i1 boolean.
-Value *TreeToLLVM::EmitCompare(tree lhs, tree rhs, unsigned code) {
- Value *LHS = EmitRegister(lhs);
- Value *RHS = UselesslyTypeConvert(EmitRegister(rhs), LHS->getType());
-
- // Compute the LLVM opcodes corresponding to the GCC comparison.
- CmpInst::Predicate UIPred = CmpInst::BAD_ICMP_PREDICATE;
- CmpInst::Predicate SIPred = CmpInst::BAD_ICMP_PREDICATE;
- CmpInst::Predicate FPPred = CmpInst::BAD_FCMP_PREDICATE;
-
- switch (code) {
- default:
- assert(false && "Unhandled condition code!");
- case LT_EXPR:
- UIPred = CmpInst::ICMP_ULT;
- SIPred = CmpInst::ICMP_SLT;
- FPPred = CmpInst::FCMP_OLT;
- break;
- case LE_EXPR:
- UIPred = CmpInst::ICMP_ULE;
- SIPred = CmpInst::ICMP_SLE;
- FPPred = CmpInst::FCMP_OLE;
- break;
- case GT_EXPR:
- UIPred = CmpInst::ICMP_UGT;
- SIPred = CmpInst::ICMP_SGT;
- FPPred = CmpInst::FCMP_OGT;
- break;
- case GE_EXPR:
- UIPred = CmpInst::ICMP_UGE;
- SIPred = CmpInst::ICMP_SGE;
- FPPred = CmpInst::FCMP_OGE;
- break;
- case EQ_EXPR:
- UIPred = SIPred = CmpInst::ICMP_EQ;
- FPPred = CmpInst::FCMP_OEQ;
- break;
- case NE_EXPR:
- UIPred = SIPred = CmpInst::ICMP_NE;
- FPPred = CmpInst::FCMP_UNE;
- break;
- case UNORDERED_EXPR: FPPred = CmpInst::FCMP_UNO; break;
- case ORDERED_EXPR: FPPred = CmpInst::FCMP_ORD; break;
- case UNLT_EXPR: FPPred = CmpInst::FCMP_ULT; break;
- case UNLE_EXPR: FPPred = CmpInst::FCMP_ULE; break;
- case UNGT_EXPR: FPPred = CmpInst::FCMP_UGT; break;
- case UNGE_EXPR: FPPred = CmpInst::FCMP_UGE; break;
- case UNEQ_EXPR: FPPred = CmpInst::FCMP_UEQ; break;
- case LTGT_EXPR: FPPred = CmpInst::FCMP_ONE; break;
- }
-
- if (TREE_CODE(TREE_TYPE(lhs)) == COMPLEX_TYPE) {
- Value *LHSr, *LHSi;
- SplitComplex(LHS, LHSr, LHSi, TREE_TYPE(TREE_TYPE(lhs)));
- Value *RHSr, *RHSi;
- SplitComplex(RHS, RHSr, RHSi, TREE_TYPE(TREE_TYPE(lhs)));
-
- Value *DSTr, *DSTi;
- if (LHSr->getType()->isFloatingPointTy()) {
- DSTr = Builder.CreateFCmp(FPPred, LHSr, RHSr);
- DSTi = Builder.CreateFCmp(FPPred, LHSi, RHSi);
- if (FPPred == CmpInst::FCMP_OEQ)
- return Builder.CreateAnd(DSTr, DSTi);
- assert(FPPred == CmpInst::FCMP_UNE && "Unhandled complex comparison!");
- return Builder.CreateOr(DSTr, DSTi);
- }
-
- assert(SIPred == UIPred && "(In)equality comparison depends on sign!");
- DSTr = Builder.CreateICmp(UIPred, LHSr, RHSr);
- DSTi = Builder.CreateICmp(UIPred, LHSi, RHSi);
- if (UIPred == CmpInst::ICMP_EQ)
- return Builder.CreateAnd(DSTr, DSTi);
- assert(UIPred == CmpInst::ICMP_NE && "Unhandled complex comparison!");
- return Builder.CreateOr(DSTr, DSTi);
- }
-
- if (LHS->getType()->isFPOrFPVectorTy())
- return Builder.CreateFCmp(FPPred, LHS, RHS);
-
- // Determine which predicate to use based on signedness.
- CmpInst::Predicate pred = TYPE_UNSIGNED(TREE_TYPE(lhs)) ? UIPred : SIPred;
- return Builder.CreateICmp(pred, LHS, RHS);
-}
-
-Value *TreeToLLVM::EmitReg_MinMaxExpr(tree type, tree op0, tree op1,
- unsigned UIPred, unsigned SIPred,
- unsigned FPPred, bool isMax) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
-
- const Type *Ty = GetRegType(type);
-
- // The LHS, RHS and Ty could be integer, floating or pointer typed. We need
- // to convert the LHS and RHS into the destination type before doing the
- // comparison. Use CastInst::getCastOpcode to get this right.
- bool TyIsSigned = !TYPE_UNSIGNED(type);
- bool LHSIsSigned = !TYPE_UNSIGNED(TREE_TYPE(op0));
- bool RHSIsSigned = !TYPE_UNSIGNED(TREE_TYPE(op1));
- Instruction::CastOps opcode =
- CastInst::getCastOpcode(LHS, LHSIsSigned, Ty, TyIsSigned);
- LHS = Builder.CreateCast(opcode, LHS, Ty);
- opcode = CastInst::getCastOpcode(RHS, RHSIsSigned, Ty, TyIsSigned);
- RHS = Builder.CreateCast(opcode, RHS, Ty);
-
- Value *Compare;
- if (LHS->getType()->isFloatingPointTy())
- Compare = Builder.CreateFCmp(FCmpInst::Predicate(FPPred), LHS, RHS);
- else if (TYPE_UNSIGNED(type))
- Compare = Builder.CreateICmp(ICmpInst::Predicate(UIPred), LHS, RHS);
- else
- Compare = Builder.CreateICmp(ICmpInst::Predicate(SIPred), LHS, RHS);
-
- return Builder.CreateSelect(Compare, LHS, RHS, isMax ? "max" : "min");
-}
-
-Value *TreeToLLVM::EmitReg_RotateOp(tree type, tree op0, tree op1,
- unsigned Opc1, unsigned Opc2) {
- Value *In = EmitRegister(op0);
- Value *Amt = EmitRegister(op1);
-
- if (Amt->getType() != In->getType())
- Amt = Builder.CreateIntCast(Amt, In->getType(), /*isSigned*/false,
- Amt->getName()+".cast");
-
- Value *TypeSize =
- ConstantInt::get(In->getType(),
- In->getType()->getPrimitiveSizeInBits());
-
- // Do the two shifts.
- Value *V1 = Builder.CreateBinOp((Instruction::BinaryOps)Opc1, In, Amt);
- Value *OtherShift = Builder.CreateSub(TypeSize, Amt);
- Value *V2 = Builder.CreateBinOp((Instruction::BinaryOps)Opc2, In, OtherShift);
-
- // Or the two together to return them.
- Value *Merge = Builder.CreateOr(V1, V2);
- return Builder.CreateIntCast(Merge, GetRegType(type), /*isSigned*/false);
-}
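
The two shifts OR'd together implement a rotate. A self-contained C++ sketch of a left rotate in the same style (the masking of the right-shift amount keeps the zero-amount case well defined in C++; that detail is an addition, not taken from the code above):

    #include <cassert>
    #include <cstdint>

    static uint32_t rotate_left(uint32_t in, unsigned amt) {
      // (in << amt) | (in >> (32 - amt)), with both shift amounts masked so
      // that amt == 0 does not shift by the full width.
      return (in << (amt & 31)) | (in >> ((32 - amt) & 31));
    }

    int main() {
      assert(rotate_left(0x80000001u, 1) == 0x00000003u);
      assert(rotate_left(0x12345678u, 0) == 0x12345678u);
      return 0;
    }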
-
-Value *TreeToLLVM::EmitReg_ShiftOp(tree op0, tree op1, unsigned Opc) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- if (RHS->getType() != LHS->getType())
- RHS = Builder.CreateIntCast(RHS, LHS->getType(), /*isSigned*/false,
- RHS->getName()+".cast");
-
- return Builder.CreateBinOp((Instruction::BinaryOps)Opc, LHS, RHS);
-}
-
-Value *TreeToLLVM::EmitReg_TruthOp(tree type, tree op0, tree op1, unsigned Opc){
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
-
- // This is a truth operation like the strict &&, ||, ^^. Convert to bool as
- // a test against zero.
- LHS = Builder.CreateICmpNE(LHS,
- Constant::getNullValue(LHS->getType()),
- "toBool");
- RHS = Builder.CreateICmpNE(RHS,
- Constant::getNullValue(RHS->getType()),
- "toBool");
-
- Value *Res = Builder.CreateBinOp((Instruction::BinaryOps)Opc, LHS, RHS);
- return Builder.CreateZExt(Res, GetRegType(type));
-}
-
-Value *TreeToLLVM::EmitReg_CEIL_DIV_EXPR(tree type, tree op0, tree op1) {
- // Notation: CEIL_DIV_EXPR <-> CDiv, TRUNC_DIV_EXPR <-> Div.
-
- // CDiv calculates LHS/RHS by rounding up to the nearest integer. In terms
- // of Div this means if the values of LHS and RHS have opposite signs or if
- // LHS is zero, then CDiv necessarily equals Div; and
- // LHS CDiv RHS = (LHS - Sign(RHS)) Div RHS + 1
- // otherwise.
-
- const Type *Ty = GetRegType(type);
- Constant *Zero = ConstantInt::get(Ty, 0);
- Constant *One = ConstantInt::get(Ty, 1);
- Constant *MinusOne = Constant::getAllOnesValue(Ty);
-
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
-
- if (!TYPE_UNSIGNED(type)) {
- // In the case of signed arithmetic, we calculate CDiv as follows:
- // LHS CDiv RHS = (LHS - Sign(RHS) * Offset) Div RHS + Offset,
- // where Offset is 1 if LHS and RHS have the same sign and LHS is
- // not zero, and 0 otherwise.
-
- // On some machines INT_MIN Div -1 traps. You might expect a trap for
- // INT_MIN CDiv -1 too, but this implementation will not generate one.
- // Quick quiz question: what value is returned for INT_MIN CDiv -1?
-
- // Determine the signs of LHS and RHS, and whether they have the same sign.
- Value *LHSIsPositive = Builder.CreateICmpSGE(LHS, Zero);
- Value *RHSIsPositive = Builder.CreateICmpSGE(RHS, Zero);
- Value *HaveSameSign = Builder.CreateICmpEQ(LHSIsPositive, RHSIsPositive);
-
- // Offset equals 1 if LHS and RHS have the same sign and LHS is not zero.
- Value *LHSNotZero = Builder.CreateICmpNE(LHS, Zero);
- Value *OffsetOne = Builder.CreateAnd(HaveSameSign, LHSNotZero);
- // ... otherwise it is 0.
- Value *Offset = Builder.CreateSelect(OffsetOne, One, Zero);
-
- // Calculate Sign(RHS) ...
- Value *SignRHS = Builder.CreateSelect(RHSIsPositive, One, MinusOne);
- // ... and Sign(RHS) * Offset
- Value *SignedOffset = Builder.CreateSExt(OffsetOne, Ty);
- SignedOffset = Builder.CreateAnd(SignRHS, SignedOffset);
-
- // Return CDiv = (LHS - Sign(RHS) * Offset) Div RHS + Offset.
- Value *CDiv = Builder.CreateSub(LHS, SignedOffset);
- CDiv = Builder.CreateSDiv(CDiv, RHS);
- return Builder.CreateAdd(CDiv, Offset, "cdiv");
- }
-
- // In the case of unsigned arithmetic, LHS and RHS necessarily have the
- // same sign, so we can use
- // LHS CDiv RHS = (LHS - 1) Div RHS + 1
- // as long as LHS is non-zero.
-
- // Offset is 1 if LHS is non-zero, 0 otherwise.
- Value *LHSNotZero = Builder.CreateICmpNE(LHS, Zero);
- Value *Offset = Builder.CreateSelect(LHSNotZero, One, Zero);
-
- // Return CDiv = (LHS - Offset) Div RHS + Offset.
- Value *CDiv = Builder.CreateSub(LHS, Offset);
- CDiv = Builder.CreateUDiv(CDiv, RHS);
- return Builder.CreateAdd(CDiv, Offset, "cdiv");
-}
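
A quick numeric check of the signed identity used above, as a standalone C++ sketch on plain ints (not the LLVM builder calls):

    #include <cassert>

    // LHS CDiv RHS = (LHS - Sign(RHS) * Offset) Div RHS + Offset, where Offset
    // is 1 when LHS is non-zero and LHS, RHS have the same sign, 0 otherwise.
    static int ceil_div(int lhs, int rhs) {
      int offset = (lhs != 0) && ((lhs >= 0) == (rhs >= 0));
      int sign_rhs = rhs >= 0 ? 1 : -1;
      return (lhs - sign_rhs * offset) / rhs + offset;
    }

    int main() {
      assert(ceil_div(7, 2) == 4);    //  3.5 rounds up to  4
      assert(ceil_div(-7, 2) == -3);  // -3.5 rounds up to -3
      assert(ceil_div(6, 3) == 2);    // exact division is unchanged
      return 0;
    }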
-
-Value *TreeToLLVM::EmitReg_BIT_AND_EXPR(tree op0, tree op1) {
- return Builder.CreateAnd(EmitRegister(op0), EmitRegister(op1));
-}
-
-Value *TreeToLLVM::EmitReg_BIT_IOR_EXPR(tree op0, tree op1) {
- return Builder.CreateOr(EmitRegister(op0), EmitRegister(op1));
-}
-
-Value *TreeToLLVM::EmitReg_BIT_XOR_EXPR(tree op0, tree op1) {
- return Builder.CreateXor(EmitRegister(op0), EmitRegister(op1));
-}
-
-Value *TreeToLLVM::EmitReg_COMPLEX_EXPR(tree op0, tree op1) {
- return CreateComplex(EmitRegister(op0), EmitRegister(op1), TREE_TYPE(op1));
-}
-
-Value *TreeToLLVM::EmitReg_FLOOR_DIV_EXPR(tree type, tree op0, tree op1) {
- // Notation: FLOOR_DIV_EXPR <-> FDiv, TRUNC_DIV_EXPR <-> Div.
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
-
- // FDiv calculates LHS/RHS by rounding down to the nearest integer. In terms
- // of Div this means if the values of LHS and RHS have the same sign or if LHS
- // is zero, then FDiv necessarily equals Div; and
- // LHS FDiv RHS = (LHS + Sign(RHS)) Div RHS - 1
- // otherwise.
-
- if (TYPE_UNSIGNED(type))
- // In the case of unsigned arithmetic, LHS and RHS necessarily have the
- // same sign, so FDiv is the same as Div.
- return Builder.CreateUDiv(LHS, RHS, "fdiv");
-
- const Type *Ty = GetRegType(type);
- Constant *Zero = ConstantInt::get(Ty, 0);
- Constant *One = ConstantInt::get(Ty, 1);
- Constant *MinusOne = Constant::getAllOnesValue(Ty);
-
- // In the case of signed arithmetic, we calculate FDiv as follows:
- // LHS FDiv RHS = (LHS + Sign(RHS) * Offset) Div RHS - Offset,
- // where Offset is 1 if LHS and RHS have opposite signs and LHS is
- // not zero, and 0 otherwise.
-
- // Determine the signs of LHS and RHS, and whether they have the same sign.
- Value *LHSIsPositive = Builder.CreateICmpSGE(LHS, Zero);
- Value *RHSIsPositive = Builder.CreateICmpSGE(RHS, Zero);
- Value *SignsDiffer = Builder.CreateICmpNE(LHSIsPositive, RHSIsPositive);
-
- // Offset equals 1 if LHS and RHS have opposite signs and LHS is not zero.
- Value *LHSNotZero = Builder.CreateICmpNE(LHS, Zero);
- Value *OffsetOne = Builder.CreateAnd(SignsDiffer, LHSNotZero);
- // ... otherwise it is 0.
- Value *Offset = Builder.CreateSelect(OffsetOne, One, Zero);
-
- // Calculate Sign(RHS) ...
- Value *SignRHS = Builder.CreateSelect(RHSIsPositive, One, MinusOne);
- // ... and Sign(RHS) * Offset
- Value *SignedOffset = Builder.CreateSExt(OffsetOne, Ty);
- SignedOffset = Builder.CreateAnd(SignRHS, SignedOffset);
-
- // Return FDiv = (LHS + Sign(RHS) * Offset) Div RHS - Offset.
- Value *FDiv = Builder.CreateAdd(LHS, SignedOffset);
- FDiv = Builder.CreateSDiv(FDiv, RHS);
- return Builder.CreateSub(FDiv, Offset, "fdiv");
-}
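
The same kind of numeric check for the signed FDiv identity, again as a standalone C++ sketch:

    #include <cassert>

    // LHS FDiv RHS = (LHS + Sign(RHS) * Offset) Div RHS - Offset, where Offset
    // is 1 when LHS is non-zero and LHS, RHS have opposite signs, 0 otherwise.
    static int floor_div(int lhs, int rhs) {
      int offset = (lhs != 0) && ((lhs >= 0) != (rhs >= 0));
      int sign_rhs = rhs >= 0 ? 1 : -1;
      return (lhs + sign_rhs * offset) / rhs - offset;
    }

    int main() {
      assert(floor_div(7, 2) == 3);    //  3.5 rounds down to  3
      assert(floor_div(-7, 2) == -4);  // -3.5 rounds down to -4
      assert(floor_div(7, -2) == -4);  // -3.5 rounds down to -4
      return 0;
    }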
-
-Value *TreeToLLVM::EmitReg_FLOOR_MOD_EXPR(tree type, tree op0, tree op1) {
- // Notation: FLOOR_MOD_EXPR <-> Mod, TRUNC_MOD_EXPR <-> Rem.
-
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
-
- // We express Mod in terms of Rem as follows: if RHS exactly divides LHS,
- // or the values of LHS and RHS have the same sign, then Mod equals Rem.
- // Otherwise Mod equals Rem + RHS. This means that LHS Mod RHS traps iff
- // LHS Rem RHS traps.
- if (TYPE_UNSIGNED(type))
- // LHS and RHS values must have the same sign if their type is unsigned.
- return Builder.CreateURem(LHS, RHS);
-
- const Type *Ty = GetRegType(type);
- Constant *Zero = ConstantInt::get(Ty, 0);
-
- // The two possible values for Mod.
- Value *Rem = Builder.CreateSRem(LHS, RHS, "rem");
- Value *RemPlusRHS = Builder.CreateAdd(Rem, RHS);
-
- // HaveSameSign: (LHS >= 0) == (RHS >= 0).
- Value *LHSIsPositive = Builder.CreateICmpSGE(LHS, Zero);
- Value *RHSIsPositive = Builder.CreateICmpSGE(RHS, Zero);
- Value *HaveSameSign = Builder.CreateICmpEQ(LHSIsPositive,RHSIsPositive);
-
- // RHS exactly divides LHS iff Rem is zero.
- Value *RemIsZero = Builder.CreateICmpEQ(Rem, Zero);
-
- Value *SameAsRem = Builder.CreateOr(HaveSameSign, RemIsZero);
- return Builder.CreateSelect(SameAsRem, Rem, RemPlusRHS, "mod");
-}
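
And a standalone check of the Mod-from-Rem rule described above, using C++'s truncating % as the Rem:

    #include <cassert>

    // Mod equals Rem when RHS divides LHS exactly or the operands have the
    // same sign; otherwise Mod equals Rem + RHS.
    static int floor_mod(int lhs, int rhs) {
      int rem = lhs % rhs;
      if (rem == 0 || (lhs >= 0) == (rhs >= 0))
        return rem;
      return rem + rhs;
    }

    int main() {
      assert(floor_mod(7, 2) == 1);
      assert(floor_mod(-7, 2) == 1);   // rem is -1, adjusted to -1 + 2
      assert(floor_mod(7, -2) == -1);  // rem is  1, adjusted to  1 - 2
      return 0;
    }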
-
-Value *TreeToLLVM::EmitReg_MINUS_EXPR(tree op0, tree op1) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- tree type = TREE_TYPE(op0);
-
- if (TREE_CODE(type) == COMPLEX_TYPE) {
- tree elt_type = TREE_TYPE(type);
- Value *LHSr, *LHSi; SplitComplex(LHS, LHSr, LHSi, elt_type);
- Value *RHSr, *RHSi; SplitComplex(RHS, RHSr, RHSi, elt_type);
-
- // (a+ib) - (c+id) = (a-c) + i(b-d)
- LHSr = CreateAnySub(LHSr, RHSr, elt_type);
- LHSi = CreateAnySub(LHSi, RHSi, elt_type);
-
- return CreateComplex(LHSr, LHSi, elt_type);
- }
-
- return CreateAnySub(LHS, RHS, type);
-}
-
-Value *TreeToLLVM::EmitReg_MULT_EXPR(tree op0, tree op1) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- tree type = TREE_TYPE(op0);
-
- if (TREE_CODE(type) == COMPLEX_TYPE) {
- tree elt_type = TREE_TYPE(type);
- Value *LHSr, *LHSi; SplitComplex(LHS, LHSr, LHSi, elt_type);
- Value *RHSr, *RHSi; SplitComplex(RHS, RHSr, RHSi, elt_type);
- Value *DSTr, *DSTi;
-
- // (a+ib) * (c+id) = (ac-bd) + i(ad+cb)
- if (SCALAR_FLOAT_TYPE_P(elt_type)) {
- Value *Tmp1 = Builder.CreateFMul(LHSr, RHSr); // a*c
- Value *Tmp2 = Builder.CreateFMul(LHSi, RHSi); // b*d
- DSTr = Builder.CreateFSub(Tmp1, Tmp2); // ac-bd
-
- Value *Tmp3 = Builder.CreateFMul(LHSr, RHSi); // a*d
- Value *Tmp4 = Builder.CreateFMul(RHSr, LHSi); // c*b
- DSTi = Builder.CreateFAdd(Tmp3, Tmp4); // ad+cb
- } else {
- // If overflow does not wrap in the element type then it is tempting to
- // use NSW operations here. However that would be wrong since overflow
- // of an intermediate value calculated here does not necessarily imply
- // that the final result overflows.
- Value *Tmp1 = Builder.CreateMul(LHSr, RHSr); // a*c
- Value *Tmp2 = Builder.CreateMul(LHSi, RHSi); // b*d
- DSTr = Builder.CreateSub(Tmp1, Tmp2); // ac-bd
-
- Value *Tmp3 = Builder.CreateMul(LHSr, RHSi); // a*d
- Value *Tmp4 = Builder.CreateMul(RHSr, LHSi); // c*b
- DSTi = Builder.CreateAdd(Tmp3, Tmp4); // ad+cb
- }
-
- return CreateComplex(DSTr, DSTi, elt_type);
- }
-
- return CreateAnyMul(LHS, RHS, type);
-}
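
As a worked instance of the (a+ib)*(c+id) expansion above, a standalone C++ sketch on plain doubles (the struct is illustrative, not the LLVM value pair):

    #include <cassert>

    struct Cplx { double r, i; };

    // (a+ib) * (c+id) = (ac - bd) + i(ad + cb)
    static Cplx cmul(Cplx x, Cplx y) {
      Cplx z = { x.r * y.r - x.i * y.i, x.r * y.i + y.r * x.i };
      return z;
    }

    int main() {
      Cplx a = { 1, 2 }, b = { 3, 4 };
      Cplx p = cmul(a, b);             // (1+2i)(3+4i) = -5 + 10i
      assert(p.r == -5 && p.i == 10);
      return 0;
    }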
-
-Value *TreeToLLVM::EmitReg_PLUS_EXPR(tree op0, tree op1) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- tree type = TREE_TYPE(op0);
-
- if (TREE_CODE(type) == COMPLEX_TYPE) {
- tree elt_type = TREE_TYPE(type);
- Value *LHSr, *LHSi; SplitComplex(LHS, LHSr, LHSi, elt_type);
- Value *RHSr, *RHSi; SplitComplex(RHS, RHSr, RHSi, elt_type);
-
- // (a+ib) + (c+id) = (a+c) + i(b+d)
- LHSr = CreateAnyAdd(LHSr, RHSr, elt_type);
- LHSi = CreateAnyAdd(LHSi, RHSi, elt_type);
-
- return CreateComplex(LHSr, LHSi, elt_type);
- }
-
- return CreateAnyAdd(LHS, RHS, type);
-}
-
-Value *TreeToLLVM::EmitReg_POINTER_PLUS_EXPR(tree type, tree op0, tree op1) {
- Value *Ptr = EmitRegister(op0); // The pointer.
- Value *Idx = EmitRegister(op1); // The offset in bytes.
-
- // Convert the pointer into an i8* and add the offset to it.
- Ptr = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
- Value *GEP = POINTER_TYPE_OVERFLOW_UNDEFINED ?
- Builder.CreateInBoundsGEP(Ptr, Idx) : Builder.CreateGEP(Ptr, Idx);
-
- // The result may be of a different pointer type.
- return UselesslyTypeConvert(GEP, GetRegType(type));
-}
-
-Value *TreeToLLVM::EmitReg_RDIV_EXPR(tree op0, tree op1) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- tree type = TREE_TYPE(op0);
-
- if (TREE_CODE(type) == COMPLEX_TYPE) {
- tree elt_type = TREE_TYPE(type);
- Value *LHSr, *LHSi; SplitComplex(LHS, LHSr, LHSi, elt_type);
- Value *RHSr, *RHSi; SplitComplex(RHS, RHSr, RHSi, elt_type);
- Value *DSTr, *DSTi;
-
- // (a+ib) / (c+id) = ((ac+bd)/(cc+dd)) + i((bc-ad)/(cc+dd))
- assert (SCALAR_FLOAT_TYPE_P(elt_type) && "RDIV_EXPR not floating point!");
- Value *Tmp1 = Builder.CreateFMul(LHSr, RHSr); // a*c
- Value *Tmp2 = Builder.CreateFMul(LHSi, RHSi); // b*d
- Value *Tmp3 = Builder.CreateFAdd(Tmp1, Tmp2); // ac+bd
-
- Value *Tmp4 = Builder.CreateFMul(RHSr, RHSr); // c*c
- Value *Tmp5 = Builder.CreateFMul(RHSi, RHSi); // d*d
- Value *Tmp6 = Builder.CreateFAdd(Tmp4, Tmp5); // cc+dd
- DSTr = Builder.CreateFDiv(Tmp3, Tmp6);
-
- Value *Tmp7 = Builder.CreateFMul(LHSi, RHSr); // b*c
- Value *Tmp8 = Builder.CreateFMul(LHSr, RHSi); // a*d
- Value *Tmp9 = Builder.CreateFSub(Tmp7, Tmp8); // bc-ad
- DSTi = Builder.CreateFDiv(Tmp9, Tmp6);
-
- return CreateComplex(DSTr, DSTi, elt_type);
- }
-
- assert(FLOAT_TYPE_P(type) && "RDIV_EXPR not floating point!");
- return Builder.CreateFDiv(LHS, RHS);
-}
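
Similarly, a worked instance of the complex division formula above (it inverts the multiplication example: (-5+10i)/(3+4i) gives back 1+2i):

    #include <cassert>

    struct Cplx { double r, i; };

    // (a+ib) / (c+id) = ((ac + bd) + i(bc - ad)) / (cc + dd)
    static Cplx cdiv(Cplx x, Cplx y) {
      double denom = y.r * y.r + y.i * y.i;
      Cplx z = { (x.r * y.r + x.i * y.i) / denom,
                 (x.i * y.r - x.r * y.i) / denom };
      return z;
    }

    int main() {
      Cplx num = { -5, 10 }, den = { 3, 4 };
      Cplx q = cdiv(num, den);
      assert(q.r == 1 && q.i == 2);
      return 0;
    }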
-
-Value *TreeToLLVM::EmitReg_ROUND_DIV_EXPR(tree type, tree op0, tree op1) {
- // Notation: ROUND_DIV_EXPR <-> RDiv, TRUNC_DIV_EXPR <-> Div.
-
- // RDiv calculates LHS/RHS by rounding to the nearest integer. Ties
- // are broken by rounding away from zero. In terms of Div this means:
- // LHS RDiv RHS = (LHS + (RHS Div 2)) Div RHS
- // if the values of LHS and RHS have the same sign; and
- // LHS RDiv RHS = (LHS - (RHS Div 2)) Div RHS
- // if the values of LHS and RHS differ in sign. The intermediate
- // expressions in these formulae can overflow, so some tweaking is
- // required to ensure correct results. The details depend on whether
- // we are doing signed or unsigned arithmetic.
-
- const Type *Ty = GetRegType(type);
- Constant *Zero = ConstantInt::get(Ty, 0);
- Constant *Two = ConstantInt::get(Ty, 2);
-
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
-
- if (!TYPE_UNSIGNED(type)) {
- // In the case of signed arithmetic, we calculate RDiv as follows:
- // LHS RDiv RHS = (sign) ( (|LHS| + (|RHS| UDiv 2)) UDiv |RHS| ),
- // where sign is +1 if LHS and RHS have the same sign, -1 if their
- // signs differ. Doing the computation unsigned ensures that there
- // is no overflow.
-
- // On some machines INT_MIN Div -1 traps. You might expect a trap for
- // INT_MIN RDiv -1 too, but this implementation will not generate one.
- // Quick quiz question: what value is returned for INT_MIN RDiv -1?
-
- // Determine the signs of LHS and RHS, and whether they have the same sign.
- Value *LHSIsPositive = Builder.CreateICmpSGE(LHS, Zero);
- Value *RHSIsPositive = Builder.CreateICmpSGE(RHS, Zero);
- Value *HaveSameSign = Builder.CreateICmpEQ(LHSIsPositive, RHSIsPositive);
-
- // Calculate |LHS| ...
- Value *MinusLHS = Builder.CreateNeg(LHS);
- Value *AbsLHS = Builder.CreateSelect(LHSIsPositive, LHS, MinusLHS,
- LHS->getName()+".abs");
- // ... and |RHS|
- Value *MinusRHS = Builder.CreateNeg(RHS);
- Value *AbsRHS = Builder.CreateSelect(RHSIsPositive, RHS, MinusRHS,
- RHS->getName()+".abs");
-
- // Calculate AbsRDiv = (|LHS| + (|RHS| UDiv 2)) UDiv |RHS|.
- Value *HalfAbsRHS = Builder.CreateUDiv(AbsRHS, Two);
- Value *Numerator = Builder.CreateAdd(AbsLHS, HalfAbsRHS);
- Value *AbsRDiv = Builder.CreateUDiv(Numerator, AbsRHS);
-
- // Return AbsRDiv or -AbsRDiv according to whether LHS and RHS have the
- // same sign or not.
- Value *MinusAbsRDiv = Builder.CreateNeg(AbsRDiv);
- return Builder.CreateSelect(HaveSameSign, AbsRDiv, MinusAbsRDiv, "rdiv");
- }
-
- // In the case of unsigned arithmetic, LHS and RHS necessarily have the
- // same sign, however overflow is a problem. We want to use the formula
- // LHS RDiv RHS = (LHS + (RHS Div 2)) Div RHS,
- // but if LHS + (RHS Div 2) overflows then we get the wrong result. Since
- // the use of a conditional branch seems to be unavoidable, we choose the
- // simple solution of explicitly checking for overflow, and using
- // LHS RDiv RHS = ((LHS + (RHS Div 2)) - RHS) Div RHS + 1
- // if it occurred.
-
- // Usually the numerator is LHS + (RHS Div 2); calculate this.
- Value *HalfRHS = Builder.CreateUDiv(RHS, Two);
- Value *Numerator = Builder.CreateAdd(LHS, HalfRHS);
-
- // Did the calculation overflow?
- Value *Overflowed = Builder.CreateICmpULT(Numerator, HalfRHS);
-
- // If so, use (LHS + (RHS Div 2)) - RHS for the numerator instead.
- Value *AltNumerator = Builder.CreateSub(Numerator, RHS);
- Numerator = Builder.CreateSelect(Overflowed, AltNumerator, Numerator);
-
- // Quotient = Numerator / RHS.
- Value *Quotient = Builder.CreateUDiv(Numerator, RHS);
-
- // Return Quotient unless we overflowed, in which case return Quotient + 1.
- return Builder.CreateAdd(Quotient, Builder.CreateIntCast(Overflowed, Ty,
- /*isSigned*/false),
- "rdiv");
-}
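
A numeric check of the signed RDiv identity above (round to nearest, ties away from zero), as a standalone C++ sketch; plain ints here, whereas the real code uses unsigned operations to dodge intermediate overflow:

    #include <cassert>

    // LHS RDiv RHS = sign * ((|LHS| + |RHS|/2) / |RHS|), where sign is +1 when
    // LHS and RHS have the same sign and -1 otherwise.
    static int round_div(int lhs, int rhs) {
      int abs_lhs = lhs >= 0 ? lhs : -lhs;
      int abs_rhs = rhs >= 0 ? rhs : -rhs;
      int abs_rdiv = (abs_lhs + abs_rhs / 2) / abs_rhs;
      return (lhs >= 0) == (rhs >= 0) ? abs_rdiv : -abs_rdiv;
    }

    int main() {
      assert(round_div(7, 2) == 4);    //  3.5 ties away from zero to  4
      assert(round_div(-7, 2) == -4);  // -3.5 ties away from zero to -4
      assert(round_div(7, 3) == 2);    //  2.33 rounds to 2
      return 0;
    }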
-
-Value *TreeToLLVM::EmitReg_TRUNC_DIV_EXPR(tree op0, tree op1, bool isExact) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- tree type = TREE_TYPE(op0);
-
- if (TREE_CODE(type) == COMPLEX_TYPE) {
- tree elt_type = TREE_TYPE(type);
- Value *LHSr, *LHSi; SplitComplex(LHS, LHSr, LHSi, elt_type);
- Value *RHSr, *RHSi; SplitComplex(RHS, RHSr, RHSi, elt_type);
- Value *DSTr, *DSTi;
-
- // (a+ib) / (c+id) = ((ac+bd)/(cc+dd)) + i((bc-ad)/(cc+dd))
- assert (LHSr->getType()->isIntegerTy() && "TRUNC_DIV_EXPR not integer!");
- // If overflow does not wrap in the element type then it is tempting to
- // use NSW operations here. However that would be wrong since overflow
- // of an intermediate value calculated here does not necessarily imply
- // that the final result overflows.
- Value *Tmp1 = Builder.CreateMul(LHSr, RHSr); // a*c
- Value *Tmp2 = Builder.CreateMul(LHSi, RHSi); // b*d
- Value *Tmp3 = Builder.CreateAdd(Tmp1, Tmp2); // ac+bd
-
- Value *Tmp4 = Builder.CreateMul(RHSr, RHSr); // c*c
- Value *Tmp5 = Builder.CreateMul(RHSi, RHSi); // d*d
- Value *Tmp6 = Builder.CreateAdd(Tmp4, Tmp5); // cc+dd
- DSTr = TYPE_UNSIGNED(elt_type) ?
- Builder.CreateUDiv(Tmp3, Tmp6) : Builder.CreateSDiv(Tmp3, Tmp6);
-
- Value *Tmp7 = Builder.CreateMul(LHSi, RHSr); // b*c
- Value *Tmp8 = Builder.CreateMul(LHSr, RHSi); // a*d
- Value *Tmp9 = Builder.CreateSub(Tmp7, Tmp8); // bc-ad
- DSTi = TYPE_UNSIGNED(elt_type) ?
- Builder.CreateUDiv(Tmp9, Tmp6) : Builder.CreateSDiv(Tmp9, Tmp6);
-
- return CreateComplex(DSTr, DSTi, elt_type);
- }
-
- assert(LHS->getType()->isIntOrIntVectorTy() && "TRUNC_DIV_EXPR not integer!");
- if (TYPE_UNSIGNED(type))
- return Builder.CreateUDiv(LHS, RHS, "", isExact);
- else
- return Builder.CreateSDiv(LHS, RHS, "", isExact);
-}
-
-Value *TreeToLLVM::EmitReg_TRUNC_MOD_EXPR(tree op0, tree op1) {
- Value *LHS = EmitRegister(op0);
- Value *RHS = EmitRegister(op1);
- return TYPE_UNSIGNED(TREE_TYPE(op0)) ?
- Builder.CreateURem(LHS, RHS) : Builder.CreateSRem(LHS, RHS);
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Exception Handling ...
-//===----------------------------------------------------------------------===//
-
-
-
-//===----------------------------------------------------------------------===//
-// ... Render* - Convert GIMPLE to LLVM ...
-//===----------------------------------------------------------------------===//
-
-void TreeToLLVM::RenderGIMPLE_ASM(gimple stmt) {
- // A gimple asm statement consists of an asm string, a list of outputs, a list
- // of inputs, a list of clobbers, a list of labels and a "volatile" flag.
- // These correspond directly to the elements of an asm statement. For example
- // asm ("combine %2,%0" : "=r" (x) : "0" (x), "g" (y));
- // Here the asm string is "combine %2,%0" and can be obtained as a const char*
- // by calling gimple_asm_string. The only output is "=r" (x). The number of
- // outputs is given by gimple_asm_noutputs, 1 in this case, and the outputs
- // themselves can be obtained by calling gimple_asm_output_op. This returns a
- // TREE_LIST node with an SSA name for "x" as the TREE_VALUE; the TREE_PURPOSE
- // is also a TREE_LIST with TREE_VALUE a string constant holding "=r". There
- // are two inputs, "0" (x) and "g" (y), so gimple_asm_ninputs returns 2. The
- // routine gimple_asm_input_op returns them in the same format as for outputs.
- // The number of clobbers is returned by gimple_asm_nclobbers, 0 in this case.
- // To get the clobbers use gimple_asm_clobber_op. This returns a TREE_LIST
- // node with TREE_VALUE a string constant holding the clobber. To find out if
- // the asm is volatile call gimple_asm_volatile_p, which returns true if so.
- // See below for labels (this example does not have any).
-
- // Note that symbolic names have been substituted before getting here. For
- // example this
- // asm ("cmoveq %1,%2,%[result]" : [result] "=r"(result)
- // : "r"(test), "r"(new), "[result]"(old));
- // turns up as
- // asm ("cmoveq %1,%2,%0" : "=r"(result) : "r"(test), "r"(new), "0"(old));
-
- // Note that clobbers may not turn up in the same order as in the original, eg
- // asm volatile ("movc3 %0,%1,%2" : /* no outputs */
- // : "g" (from), "g" (to), "g" (count)
- // : "r0", "r1", "r2", "r3", "r4", "r5");
- // The clobbers turn up as "r5", "r4", "r3", "r2", "r1", "r0".
-
- // Here is an example of the "asm goto" construct (not yet supported by LLVM):
- // int frob(int x) {
- // int y;
- // asm goto ("frob %%r5, %1; jc %l[error]; mov (%2), %%r5"
- // : : "r"(x), "r"(&y) : "r5", "memory" : error);
- // return y;
- // error:
- // return -1;
- // }
- // The number of labels, one in this case, is returned by gimple_asm_nlabels.
- // The labels themselves are returned by gimple_asm_label_op as a TREE_LIST
- // node with TREE_PURPOSE a string constant holding the label name ("error")
- // and TREE_VALUE holding the appropriate LABEL_DECL.
-
- // TODO: Add support for labels.
- if (gimple_asm_nlabels(stmt) > 0) {
- sorry("'asm goto' not supported");
- return;
- }
-
- const unsigned NumOutputs = gimple_asm_noutputs (stmt);
- const unsigned NumInputs = gimple_asm_ninputs(stmt);
- const unsigned NumClobbers = gimple_asm_nclobbers (stmt);
-
- /// Constraints - The output/input constraints, concatenated together in array
- /// form instead of list form. This way of doing things is forced on us by
- /// GCC routines like parse_output_constraint which rummage around inside the
- /// array.
- const char **Constraints =
- (const char **)alloca((NumOutputs + NumInputs) * sizeof(const char *));
-
- // Initialize the Constraints array.
- for (unsigned i = 0; i != NumOutputs; ++i) {
- tree Output = gimple_asm_output_op(stmt, i);
- // If there's an erroneous arg then bail out.
- if (TREE_TYPE(TREE_VALUE(Output)) == error_mark_node) return;
- // Record the output constraint.
- const char *Constraint =
- TREE_STRING_POINTER(TREE_VALUE(TREE_PURPOSE(Output)));
- Constraints[i] = Constraint;
- }
- for (unsigned i = 0; i != NumInputs; ++i) {
- tree Input = gimple_asm_input_op(stmt, i);
- // If there's an erroneous arg then bail out.
- if (TREE_TYPE(TREE_VALUE(Input)) == error_mark_node) return;
- // Record the input constraint.
- const char *Constraint =
- TREE_STRING_POINTER(TREE_VALUE(TREE_PURPOSE(Input)));
- Constraints[NumOutputs+i] = Constraint;
- }
-
- // Look for multiple alternative constraints: multiple alternatives separated
- // by commas.
- unsigned NumChoices = 0; // sentinel; real value is always at least 1.
- for (unsigned i = 0; i != NumInputs; ++i) {
- tree Input = gimple_asm_input_op(stmt, i);
- unsigned NumInputChoices = 1;
- for (const char *p = TREE_STRING_POINTER(TREE_VALUE(TREE_PURPOSE(Input)));
- *p; ++p)
- if (*p == ',')
- ++NumInputChoices;
- if (NumChoices && (NumInputChoices != NumChoices)) {
- error_at(gimple_location(stmt), "operand constraints for %<asm%> differ "
- "in number of alternatives");
- return;
- }
- if (NumChoices == 0)
- NumChoices = NumInputChoices;
- }
- for (unsigned i = 0; i != NumOutputs; ++i) {
- tree Output = gimple_asm_output_op(stmt, i);
- unsigned NumOutputChoices = 1;
- for (const char *p = TREE_STRING_POINTER(TREE_VALUE(TREE_PURPOSE(Output)));
- *p; ++p)
- if (*p == ',')
- ++NumOutputChoices;
- if (NumChoices && (NumOutputChoices != NumChoices)) {
- error_at(gimple_location(stmt), "operand constraints for %<asm%> differ "
- "in number of alternatives");
- return;
- }
- if (NumChoices == 0)
- NumChoices = NumOutputChoices;
- }
-
- // If there are multiple constraint tuples, pick one. Constraints is
- // altered to point to shorter strings (which are malloc'ed), and everything
- // below Just Works as in the NumChoices==1 case.
- BumpPtrAllocator StringStorage(256, 256);
- if (NumChoices > 1)
- ChooseConstraintTuple(stmt, Constraints, NumChoices, StringStorage);
-
- // HasSideEffects - Whether the LLVM inline asm should be marked as having
- // side effects.
- bool HasSideEffects = gimple_asm_volatile_p(stmt) || (NumOutputs == 0);
-
- // CallResultTypes - The inline asm call may return one or more results. The
- // types of the results are recorded here along with a flag indicating whether
- // the corresponding GCC type is signed.
- SmallVector<std::pair<const Type *, bool>, 4> CallResultTypes;
-
- // CallResultDests - Each result returned by the inline asm call is stored in
- // a memory location. These are listed here along with a flag indicating if
- // the GCC type corresponding to the memory location is signed. The type of
- // the memory location is allowed to differ from the type of the call result,
- // in which case the result is converted before being stored.
- SmallVector<std::pair<Value *, bool>, 4> CallResultDests;
-
- // CallOps - The operands passed to the inline asm call.
- std::vector<Value*> CallOps;
-
- // OutputLocations - For each output holds an index into CallOps (if the flag
- // is false) or into CallResultTypes (if the flag is true). Outputs returned
- // in memory are passed to the asm as an operand and thus appear in CallOps.
- // Those returned in registers are obtained as one of the results of the asm
- // call and thus correspond to an entry in CallResultTypes.
- SmallVector<std::pair<bool, unsigned>, 4> OutputLocations;
-
- // SSADefinitions - If the asm defines an SSA name then the SSA name and a
- // memory location are recorded here. The asm result defining the SSA name
- // will be stored to the memory location, and loaded out afterwards
- // to define the SSA name.
- SmallVector<std::pair<tree, MemRef>, 4> SSADefinitions;
-
- // ConstraintStr - The string of constraints in LLVM format.
- std::string ConstraintStr;
-
- // Process outputs.
- for (unsigned i = 0; i != NumOutputs; ++i) {
- tree Output = gimple_asm_output_op(stmt, i);
- tree Operand = TREE_VALUE(Output);
-
- // Parse the output constraint.
- const char *Constraint = Constraints[i];
- bool IsInOut, AllowsReg, AllowsMem;
- if (!parse_output_constraint(&Constraint, i, NumInputs, NumOutputs,
- &AllowsMem, &AllowsReg, &IsInOut))
- return;
- assert(Constraint[0] == '=' && "Not an output constraint?");
- assert(!IsInOut && "asm expression not gimplified?");
-
- std::string SimplifiedConstraint;
- // If this output register is pinned to a machine register, use that machine
- // register instead of the specified constraint.
- if (TREE_CODE(Operand) == VAR_DECL && DECL_HARD_REGISTER(Operand)) {
- const char* RegName = extractRegisterName(Operand);
- int RegNum = decode_reg_name(RegName);
- if (RegNum >= 0) {
- RegName = LLVM_GET_REG_NAME(RegName, RegNum);
- unsigned RegNameLen = strlen(RegName);
- char *NewConstraint = (char*)alloca(RegNameLen+3);
- NewConstraint[0] = '{';
- memcpy(NewConstraint+1, RegName, RegNameLen);
- NewConstraint[RegNameLen+1] = '}';
- NewConstraint[RegNameLen+2] = 0;
- SimplifiedConstraint = NewConstraint;
- // This output will now be implicit; set the side-effect flag on the asm.
- HasSideEffects = true;
- // We should no longer consider mem constraints.
- AllowsMem = false;
- } else {
- // If we can simplify the constraint into something else, do so now.
- // This avoids LLVM having to know about all the (redundant) GCC
- // constraints.
- SimplifiedConstraint = CanonicalizeConstraint(Constraint+1);
- }
- } else {
- SimplifiedConstraint = CanonicalizeConstraint(Constraint+1);
- }
-
- LValue Dest;
- const Type *DestValTy = ConvertType(TREE_TYPE(Operand));
- if (TREE_CODE(Operand) == SSA_NAME) {
- // The ASM is defining an ssa name. Store the output to a temporary, then
- // load it out again later as the ssa name.
- MemRef TmpLoc = CreateTempLoc(DestValTy);
- SSADefinitions.push_back(std::make_pair(Operand, TmpLoc));
- Dest = LValue(TmpLoc);
- } else {
- Dest = EmitLV(Operand);
- assert(cast<PointerType>(Dest.Ptr->getType())->getElementType() ==
- DestValTy && "LValue has wrong type!");
- }
-
- assert(!Dest.isBitfield() && "Cannot assign into a bitfield!");
- if (!AllowsMem && DestValTy->isSingleValueType()) {// Reg dest -> asm return
- ConstraintStr += ",=";
- ConstraintStr += SimplifiedConstraint;
- bool IsSigned = !TYPE_UNSIGNED(TREE_TYPE(Operand));
- CallResultDests.push_back(std::make_pair(Dest.Ptr, IsSigned));
- CallResultTypes.push_back(std::make_pair(DestValTy, IsSigned));
- OutputLocations.push_back(std::make_pair(true, CallResultTypes.size()-1));
- } else {
- ConstraintStr += ",=*";
- ConstraintStr += SimplifiedConstraint;
- CallOps.push_back(Dest.Ptr);
- OutputLocations.push_back(std::make_pair(false, CallOps.size()-1));
- }
- }
-
- // Process inputs.
- for (unsigned i = 0; i != NumInputs; ++i) {
- tree Input = gimple_asm_input_op(stmt, i);
- tree Val = TREE_VALUE(Input);
- tree type = TREE_TYPE(Val);
- bool IsSigned = !TYPE_UNSIGNED(type);
-
- const char *Constraint = Constraints[NumOutputs+i];
-
- bool AllowsReg, AllowsMem;
- if (!parse_input_constraint(Constraints+NumOutputs+i, i,
- NumInputs, NumOutputs, 0,
- Constraints, &AllowsMem, &AllowsReg))
- return;
- bool isIndirect = false;
- if (AllowsReg || !AllowsMem) { // Register operand.
- const Type *LLVMTy = ConvertType(type);
-
- Value *Op = 0;
- const Type *OpTy = LLVMTy;
- if (LLVMTy->isSingleValueType()) {
- if (TREE_CODE(Val)==ADDR_EXPR &&
- TREE_CODE(TREE_OPERAND(Val,0))==LABEL_DECL) {
- // Emit the label, but do not assume it is going to be the target
- // of an indirect branch. Having this logic here is a hack; there
- // should be a bit in the label identifying it as in an asm.
- Op = getLabelDeclBlock(TREE_OPERAND(Val, 0));
- } else if (TREE_CODE(Val) == VAR_DECL && DECL_HARD_REGISTER(Val)) {
- // GCC special cases hard registers used as inputs to asm statements.
- // Emit an inline asm node that copies the value out of the specified
- // register.
- assert(canEmitRegisterVariable(Val) && "Cannot read hard register!");
- Op = EmitReadOfRegisterVariable(Val);
- } else {
- Op = EmitMemory(Val);
- }
- } else {
- LValue LV = EmitLV(Val);
- assert(!LV.isBitfield() && "Inline asm can't have bitfield operand");
-
- // Small structs and unions can be treated as integers.
- uint64_t TySize = TD.getTypeSizeInBits(LLVMTy);
- if (TySize == 1 || TySize == 8 || TySize == 16 ||
- TySize == 32 || TySize == 64 || (TySize == 128 && !AllowsMem)) {
- LLVMTy = IntegerType::get(Context, TySize);
- Op =
- Builder.CreateLoad(Builder.CreateBitCast(LV.Ptr,
- LLVMTy->getPointerTo()));
- } else {
- // Codegen only supports indirect operands with mem constraints.
- if (!AllowsMem)
- error_at(gimple_location(stmt),
- "aggregate does not match inline asm register constraint");
- // Otherwise, emit our value as an lvalue.
- isIndirect = true;
- Op = LV.Ptr;
- OpTy = Op->getType();
- }
- }
-
- // If this input operand is matching an output operand, e.g. '0', check if
- // this is something that llvm supports. If the operand types are
- // different, then emit an error if 1) one of the types is not integer or
- // pointer, 2) if size of input type is larger than the output type. If
- // the size of the integer input size is smaller than the integer output
- // type, then cast it to the larger type and shift the value if the target
- // is big endian.
- if (ISDIGIT(Constraint[0])) {
- unsigned Match = atoi(Constraint);
- // This output might have gotten put in either CallResult or CallArg
- // depending on whether it's a register or not. Find its type.
- const Type *OTy = 0;
- unsigned OutputIndex = ~0U;
- if (Match < OutputLocations.size()) {
- // Indices here known to be within range.
- OutputIndex = OutputLocations[Match].second;
- if (OutputLocations[Match].first)
- OTy = CallResultTypes[OutputIndex].first;
- else {
- OTy = CallOps[OutputIndex]->getType();
- assert(OTy->isPointerTy() && "Expected pointer type!");
- OTy = cast<PointerType>(OTy)->getElementType();
- }
- }
- if (OTy && OTy != OpTy) {
- if (!(OTy->isIntegerTy() || OTy->isPointerTy()) ||
- !(OpTy->isIntegerTy() || OpTy->isPointerTy())) {
- error_at(gimple_location(stmt),
- "unsupported inline asm: input constraint with a matching "
- "output constraint of incompatible type!");
- return;
- }
- unsigned OTyBits = TD.getTypeSizeInBits(OTy);
- unsigned OpTyBits = TD.getTypeSizeInBits(OpTy);
- if (OTyBits == 0 || OpTyBits == 0) {
- error_at(gimple_location(stmt), "unsupported inline asm: input "
- "constraint with a matching output constraint of "
- "incompatible type!");
- return;
- } else if (OTyBits < OpTyBits) {
- // The output is smaller than the input. If the output is not a
- // register then bail out. Likewise, if the output is explicitly
- // mentioned in the asm string then we cannot safely promote it,
- // so bail out in this case too.
- if (!OutputLocations[Match].first ||
- isOperandMentioned(stmt, Match)) {
- error_at(gimple_location(stmt), "unsupported inline asm: input "
- "constraint with a matching output constraint of "
- "incompatible type!");
- return;
- }
- // Use the input type for the output, and arrange for the result to
- // be truncated to the original output type after the asm call.
- CallResultTypes[OutputIndex] = std::make_pair(OpTy, IsSigned);
- } else if (OTyBits > OpTyBits) {
- // The input is smaller than the output. If the input is explicitly
- // mentioned in the asm string then we cannot safely promote it, so
- // bail out.
- if (isOperandMentioned(stmt, NumOutputs + i)) {
- error_at(gimple_location(stmt), "unsupported inline asm: input "
- "constraint with a matching output constraint of "
- "incompatible type!");
- return;
- }
- Op = CastToAnyType(Op, IsSigned, OTy,
- CallResultTypes[OutputIndex].second);
- }
- }
- }
-
- CallOps.push_back(Op);
- } else { // Memory operand.
- mark_addressable(TREE_VALUE(Input));
- isIndirect = true;
- LValue Src = EmitLV(Val);
- assert(!Src.isBitfield() && "Cannot read from a bitfield!");
- CallOps.push_back(Src.Ptr);
- }
-
- ConstraintStr += ',';
- if (isIndirect)
- ConstraintStr += '*';
-
- // If this input register is pinned to a machine register, use that machine
- // register instead of the specified constraint.
- if (TREE_CODE(Val) == VAR_DECL && DECL_HARD_REGISTER(Val)) {
- const char *RegName = extractRegisterName(Val);
- int RegNum = decode_reg_name(RegName);
- if (RegNum >= 0) {
- RegName = LLVM_GET_REG_NAME(RegName, RegNum);
- ConstraintStr += '{';
- ConstraintStr += RegName;
- ConstraintStr += '}';
- continue;
- }
- }
-
- // If there is a simpler form for the register constraint, use it.
- std::string Simplified = CanonicalizeConstraint(Constraint);
- ConstraintStr += Simplified;
- }
-
- // Process clobbers.
-
- // Some targets automatically clobber registers across an asm.
- tree Clobbers;
- {
- // Create input, output & clobber lists for the benefit of md_asm_clobbers.
- tree outputs = NULL_TREE;
- if (NumOutputs) {
- tree t = outputs = gimple_asm_output_op (stmt, 0);
- for (unsigned i = 1; i < NumOutputs; i++) {
- TREE_CHAIN (t) = gimple_asm_output_op (stmt, i);
- t = gimple_asm_output_op (stmt, i);
- }
- }
-
- tree inputs = NULL_TREE;
- if (NumInputs) {
- tree t = inputs = gimple_asm_input_op (stmt, 0);
- for (unsigned i = 1; i < NumInputs; i++) {
- TREE_CHAIN (t) = gimple_asm_input_op (stmt, i);
- t = gimple_asm_input_op (stmt, i);
- }
- }
-
- tree clobbers = NULL_TREE;
- if (NumClobbers) {
- tree t = clobbers = gimple_asm_clobber_op (stmt, 0);
- for (unsigned i = 1; i < NumClobbers; i++) {
- TREE_CHAIN (t) = gimple_asm_clobber_op (stmt, i);
- t = gimple_asm_clobber_op (stmt, i);
- }
- }
-
- Clobbers = targetm.md_asm_clobbers(outputs, inputs, clobbers);
- }
-
- for (; Clobbers; Clobbers = TREE_CHAIN(Clobbers)) {
- const char *RegName = TREE_STRING_POINTER(TREE_VALUE(Clobbers));
- int RegCode = decode_reg_name(RegName);
-
- switch (RegCode) {
- case -1: // Nothing specified?
- case -2: // Invalid.
- error_at(gimple_location(stmt), "unknown register name %qs in %<asm%>",
- RegName);
- return;
- case -3: // cc
- ConstraintStr += ",~{cc}";
- break;
- case -4: // memory
- ConstraintStr += ",~{memory}";
- break;
- default: // Normal register name.
- RegName = LLVM_GET_REG_NAME(RegName, RegCode);
- ConstraintStr += ",~{";
- ConstraintStr += RegName;
- ConstraintStr += "}";
- break;
- }
- }
-
- // Compute the return type to use for the asm call.
- const Type *CallResultType;
- switch (CallResultTypes.size()) {
- // If there are no results then the return type is void!
- case 0: CallResultType = Type::getVoidTy(Context); break;
- // If there is one result then use the result's type as the return type.
- case 1: CallResultType = CallResultTypes[0].first; break;
- // If the asm returns multiple results then create a struct type with the
- // result types as its fields, and use it for the return type.
- default:
- std::vector<const Type*> Fields(CallResultTypes.size());
- for (unsigned i = 0, e = CallResultTypes.size(); i != e; ++i)
- Fields[i] = CallResultTypes[i].first;
- CallResultType = StructType::get(Context, Fields);
- break;
- }
-
- // Compute the types of the arguments to the asm call.
- std::vector<const Type*> CallArgTypes(CallOps.size());
- for (unsigned i = 0, e = CallOps.size(); i != e; ++i)
- CallArgTypes[i] = CallOps[i]->getType();
-
- // Get the type of the called asm "function".
- const FunctionType *FTy =
- FunctionType::get(CallResultType, CallArgTypes, false);
-
- // Remove the leading comma if we have operands.
- if (!ConstraintStr.empty())
- ConstraintStr.erase(ConstraintStr.begin());
-
- // Make sure we've created a valid inline asm expression.
- if (!InlineAsm::Verify(FTy, ConstraintStr)) {
- error_at(gimple_location(stmt), "Invalid or unsupported inline assembly!");
- return;
- }
-
- std::string NewAsmStr = ConvertInlineAsmStr(stmt, NumOutputs+NumInputs);
- Value *Asm = InlineAsm::get(FTy, NewAsmStr, ConstraintStr, HasSideEffects);
- CallInst *CV = Builder.CreateCall(Asm, CallOps.begin(), CallOps.end(),
- CallResultTypes.empty() ? "" : "asmtmp");
- CV->setDoesNotThrow();
-
- // If the call produces a value, store it into the destination.
- for (unsigned i = 0, NumResults = CallResultTypes.size(); i != NumResults;
- ++i) {
- Value *Val = NumResults == 1 ?
- CV : Builder.CreateExtractValue(CV, i, "asmresult");
- bool ValIsSigned = CallResultTypes[i].second;
-
- Value *Dest = CallResultDests[i].first;
- const Type *DestTy = cast<PointerType>(Dest->getType())->getElementType();
- bool DestIsSigned = CallResultDests[i].second;
- Val = CastToAnyType(Val, ValIsSigned, DestTy, DestIsSigned);
- Builder.CreateStore(Val, Dest);
- }
-
- // If the call defined any ssa names, associate them with their value.
- for (unsigned i = 0, e = SSADefinitions.size(); i != e; ++i) {
- tree Name = SSADefinitions[i].first;
- MemRef Loc = SSADefinitions[i].second;
- Value *Val = LoadRegisterFromMemory(Loc, TREE_TYPE(Name), Builder);
- DefineSSAName(Name, Val);
- }
-
- // Give the backend a chance to upgrade the inline asm to LLVM code. This
- // handles some common cases that LLVM has intrinsics for, e.g. x86 bswap ->
- // llvm.bswap.
- if (const TargetLowering *TLI = TheTarget->getTargetLowering())
- TLI->ExpandInlineAsm(CV);
-}
-
-void TreeToLLVM::RenderGIMPLE_ASSIGN(gimple stmt) {
- tree lhs = gimple_assign_lhs(stmt);
- if (AGGREGATE_TYPE_P(TREE_TYPE(lhs))) {
- assert(get_gimple_rhs_class(gimple_expr_code(stmt)) == GIMPLE_SINGLE_RHS &&
- "Aggregate type but rhs not simple!");
- LValue LV = EmitLV(lhs);
- MemRef NewLoc(LV.Ptr, LV.getAlignment(), TREE_THIS_VOLATILE(lhs));
- EmitAggregate(gimple_assign_rhs1 (stmt), NewLoc);
- return;
- }
- WriteScalarToLHS(lhs, EmitAssignRHS(stmt));
-}
-
-void TreeToLLVM::RenderGIMPLE_CALL(gimple stmt) {
- tree lhs = gimple_call_lhs(stmt);
- if (!lhs) {
- // The returned value is not used.
- if (!AGGREGATE_TYPE_P(gimple_call_return_type(stmt))) {
- OutputCallRHS(stmt, 0);
- return;
- }
- // Create a temporary to hold the returned value.
- // TODO: Figure out how to avoid creating this temporary and the
- // associated useless code that stores the returned value into it.
- MemRef Loc = CreateTempLoc(ConvertType(gimple_call_return_type(stmt)));
- OutputCallRHS(stmt, &Loc);
- return;
- }
-
- if (AGGREGATE_TYPE_P(TREE_TYPE(lhs))) {
- LValue LV = EmitLV(lhs);
- MemRef NewLoc(LV.Ptr, LV.getAlignment(), TREE_THIS_VOLATILE(lhs));
- OutputCallRHS(stmt, &NewLoc);
- return;
- }
- WriteScalarToLHS(lhs, OutputCallRHS(stmt, 0));
-}
-
-void TreeToLLVM::RenderGIMPLE_COND(gimple stmt) {
- // Emit the comparison.
- Value *Cond = EmitCompare(gimple_cond_lhs(stmt), gimple_cond_rhs(stmt),
- gimple_cond_code(stmt));
-
- // Extract the target basic blocks.
- edge true_edge, false_edge;
- extract_true_false_edges_from_block(gimple_bb(stmt), &true_edge, &false_edge);
- BasicBlock *IfTrue = getBasicBlock(true_edge->dest);
- BasicBlock *IfFalse = getBasicBlock(false_edge->dest);
-
- // Branch based on the condition.
- Builder.CreateCondBr(Cond, IfTrue, IfFalse);
-}
-
-void TreeToLLVM::RenderGIMPLE_EH_DISPATCH(gimple stmt) {
- int RegionNo = gimple_eh_dispatch_region(stmt);
- eh_region region = get_eh_region_from_number(RegionNo);
-
- switch (region->type) {
- default:
- llvm_unreachable("Unexpected region type!");
- case ERT_ALLOWED_EXCEPTIONS: {
- // Filter.
- BasicBlock *Dest = getLabelDeclBlock(region->u.allowed.label);
-
- if (!region->u.allowed.type_list) {
- // Not allowed to throw. Branch directly to the post landing pad.
- Builder.CreateBr(Dest);
- BeginBlock(BasicBlock::Create(Context));
- break;
- }
-
- // The result of a filter selection will be a negative index if there is a
- // match.
- // FIXME: It looks like you have to compare against a specific value;
- // checking for any old negative number is not enough! This should not
- // matter if the failure code branched to on a filter match is always the
- // same (as in C++), but might cause problems with other languages.
- Value *Filter = Builder.CreateLoad(getExceptionFilter(RegionNo));
-
- // Compare with the filter action value.
- Value *Zero = ConstantInt::get(Filter->getType(), 0);
- Value *Compare = Builder.CreateICmpSLT(Filter, Zero);
-
- // Branch on the compare.
- BasicBlock *NoMatchBB = BasicBlock::Create(Context);
- Builder.CreateCondBr(Compare, Dest, NoMatchBB);
- BeginBlock(NoMatchBB);
- break;
- }
- case ERT_TRY:
- // Catches.
- Value *Filter = NULL;
- SmallSet<Value *, 8> AlreadyCaught; // Typeinfos known caught.
- Function *TypeIDIntr = Intrinsic::getDeclaration(TheModule,
- Intrinsic::eh_typeid_for);
- for (eh_catch c = region->u.eh_try.first_catch; c ; c = c->next_catch) {
- BasicBlock *Dest = getLabelDeclBlock(c->label);
- if (!c->type_list) {
- // Catch-all. Branch directly to the post landing pad.
- Builder.CreateBr(Dest);
- break;
- }
-
- Value *Cond = NULL;
- for (tree type = c->type_list; type; type = TREE_CHAIN (type)) {
- Value *TypeInfo = ConvertTypeInfo(TREE_VALUE(type));
- // No point in trying to catch a typeinfo that was already caught.
- if (!AlreadyCaught.insert(TypeInfo))
- continue;
-
- TypeInfo = Builder.CreateBitCast(TypeInfo, Type::getInt8PtrTy(Context));
-
- // Call the eh_typeid_for intrinsic to get the type id.
- Value *TypeID = Builder.CreateCall(TypeIDIntr, TypeInfo, "typeid");
-
- if (!Filter)
- Filter = Builder.CreateLoad(getExceptionFilter(RegionNo));
-
- // Compare with the exception selector.
- Value *Compare = Builder.CreateICmpEQ(Filter, TypeID);
-
- Cond = Cond ? Builder.CreateOr(Cond, Compare) : Compare;
- }
-
- if (Cond) {
- BasicBlock *NoMatchBB = BasicBlock::Create(Context);
- Builder.CreateCondBr(Cond, Dest, NoMatchBB);
- BeginBlock(NoMatchBB);
- }
- }
- break;
- }
-}
-
-void TreeToLLVM::RenderGIMPLE_GOTO(gimple stmt) {
- tree dest = gimple_goto_dest(stmt);
-
- if (TREE_CODE(dest) == LABEL_DECL) {
- // Direct branch.
- Builder.CreateBr(getLabelDeclBlock(dest));
- return;
- }
-
- // Indirect branch.
- basic_block source = gimple_bb(stmt);
- IndirectBrInst *Br = Builder.CreateIndirectBr(EmitRegister(dest),
- EDGE_COUNT(source->succs));
-
- // Add the list of possible destinations.
- edge e;
- edge_iterator ei;
- FOR_EACH_EDGE (e, ei, source->succs)
- Br->addDestination(getBasicBlock(e->dest));
-}
-
-void TreeToLLVM::RenderGIMPLE_RESX(gimple stmt) {
- // Reraise an exception. If this statement is inside an exception handling
- // region then the reraised exception may be caught by the current function,
- // in which case it can be simplified into a branch.
- int DstLPadNo = lookup_stmt_eh_lp(stmt);
- eh_region dst_rgn =
- DstLPadNo ? get_eh_region_from_lp_number(DstLPadNo) : NULL;
- eh_region src_rgn = get_eh_region_from_number(gimple_resx_region(stmt));
-
- if (!src_rgn) {
- // Unreachable block?
- Builder.CreateUnreachable();
- return;
- }
-
- if (dst_rgn) {
- if (DstLPadNo < 0) {
- // The reraise is inside a must-not-throw region. Turn the reraise into a
- // call to the failure routine (e.g. std::terminate).
- assert(dst_rgn->type == ERT_MUST_NOT_THROW && "Unexpected region type!");
-
- // Branch to the block containing the failure code.
- Builder.CreateBr(getFailureBlock(dst_rgn->index));
- return;
- }
-
- // Use the exception pointer and filter value for the source region as the
- // values for the destination region.
- Value *ExcPtr = Builder.CreateLoad(getExceptionPtr(src_rgn->index));
- Builder.CreateStore(ExcPtr, getExceptionPtr(dst_rgn->index));
- Value *Filter = Builder.CreateLoad(getExceptionFilter(src_rgn->index));
- Builder.CreateStore(Filter, getExceptionFilter(dst_rgn->index));
-
- // Branch to the post landing pad for the destination region.
- eh_landing_pad lp = get_eh_landing_pad_from_number(DstLPadNo);
- assert(lp && "Post landing pad not found!");
- Builder.CreateBr(getLabelDeclBlock(lp->post_landing_pad));
- return;
- }
-
- // The exception unwinds out of the function. Note the exception to unwind.
- if (!RewindTmp) {
- RewindTmp = CreateTemporary(Type::getInt8PtrTy(Context));
- RewindTmp->setName("rewind_tmp");
- }
- Value *ExcPtr = Builder.CreateLoad(getExceptionPtr(src_rgn->index));
- Builder.CreateStore(ExcPtr, RewindTmp);
-
- // Jump to the block containing the rewind code.
- if (!RewindBB)
- RewindBB = BasicBlock::Create(Context, "rewind");
- Builder.CreateBr(RewindBB);
-}
-
-void TreeToLLVM::RenderGIMPLE_RETURN(gimple stmt) {
- tree retval = gimple_return_retval(stmt);
- tree result = DECL_RESULT(current_function_decl);
-
- if (retval && retval != error_mark_node && retval != result) {
- // Store the return value to the function's DECL_RESULT.
- MemRef DestLoc(DECL_LOCAL(result), 1, false); // FIXME: What alignment?
- if (AGGREGATE_TYPE_P(TREE_TYPE(result))) {
- EmitAggregate(retval, DestLoc);
- } else {
- Value *Val = Builder.CreateBitCast(EmitRegister(retval),
- GetRegType(TREE_TYPE(result)));
- StoreRegisterToMemory(Val, DestLoc, TREE_TYPE(result), Builder);
- }
- }
-
- // Emit a branch to the exit label.
- Builder.CreateBr(ReturnBB);
-}
-
-void TreeToLLVM::RenderGIMPLE_SWITCH(gimple stmt) {
- // Emit the condition.
- Value *Index = EmitRegister(gimple_switch_index(stmt));
- bool IndexIsSigned = !TYPE_UNSIGNED(TREE_TYPE(gimple_switch_index(stmt)));
-
- // Create the switch instruction.
- tree default_label = CASE_LABEL(gimple_switch_label(stmt, 0));
- SwitchInst *SI = Builder.CreateSwitch(Index, getLabelDeclBlock(default_label),
- gimple_switch_num_labels(stmt));
-
- // Add the switch cases.
- BasicBlock *IfBlock = 0; // Set if a range was output as an "if".
- for (size_t i = 1, e = gimple_switch_num_labels(stmt); i != e; ++i) {
- tree label = gimple_switch_label(stmt, i);
- BasicBlock *Dest = getLabelDeclBlock(CASE_LABEL(label));
-
- // Convert the integer to the right type.
- Value *Val = EmitRegister(CASE_LOW(label));
- Val = CastToAnyType(Val, !TYPE_UNSIGNED(TREE_TYPE(CASE_LOW(label))),
- Index->getType(), IndexIsSigned);
- ConstantInt *LowC = cast<ConstantInt>(Val);
-
- if (!CASE_HIGH(label)) {
- SI->addCase(LowC, Dest); // Single destination.
- continue;
- }
-
- // Otherwise, we have a range, like 'case 1 ... 17'.
- Val = EmitRegister(CASE_HIGH(label));
- // Make sure the case value is the same type as the switch expression
- Val = CastToAnyType(Val, !TYPE_UNSIGNED(TREE_TYPE(CASE_HIGH(label))),
- Index->getType(), IndexIsSigned);
- ConstantInt *HighC = cast<ConstantInt>(Val);
-
- APInt Range = HighC->getValue() - LowC->getValue();
- if (Range.ult(APInt(Range.getBitWidth(), 64))) {
- // Add all of the necessary successors to the switch.
- APInt CurrentValue = LowC->getValue();
- while (1) {
- SI->addCase(LowC, Dest);
- if (LowC == HighC) break; // Emitted the last one.
- CurrentValue++;
- LowC = ConstantInt::get(Context, CurrentValue);
- }
- } else {
- // The range is too big to add to the switch - emit an "if".
- if (!IfBlock) {
- IfBlock = BasicBlock::Create(Context);
- BeginBlock(IfBlock);
- }
- Value *Diff = Builder.CreateSub(Index, LowC);
- Value *Cond = Builder.CreateICmpULE(Diff,
- ConstantInt::get(Context, Range));
- BasicBlock *False_Block = BasicBlock::Create(Context);
- Builder.CreateCondBr(Cond, Dest, False_Block);
- BeginBlock(False_Block);
- }
- }
-
- if (IfBlock) {
- Builder.CreateBr(SI->getDefaultDest());
- SI->setSuccessor(0, IfBlock);
- }
-}
-
-
-//===----------------------------------------------------------------------===//
-// ... Render helpers ...
-//===----------------------------------------------------------------------===//
-
-/// EmitAssignRHS - Convert the RHS of a scalar GIMPLE_ASSIGN to LLVM.
-Value *TreeToLLVM::EmitAssignRHS(gimple stmt) {
- // Loads from memory and other non-register expressions are handled by
- // EmitAssignSingleRHS.
- if (get_gimple_rhs_class(gimple_expr_code(stmt)) == GIMPLE_SINGLE_RHS) {
- Value *RHS = EmitAssignSingleRHS(gimple_assign_rhs1(stmt));
- assert(RHS->getType() == GetRegType(TREE_TYPE(gimple_assign_rhs1(stmt))) &&
- "RHS has wrong type!");
- return RHS;
- }
-
- // The RHS is a register expression. Emit it now.
- tree type = TREE_TYPE(gimple_assign_lhs(stmt));
- tree_code code = gimple_assign_rhs_code(stmt);
- tree rhs1 = gimple_assign_rhs1(stmt);
- tree rhs2 = gimple_assign_rhs2(stmt);
-
- Value *RHS = 0;
- switch (code) {
- default:
- dump(stmt);
- llvm_unreachable("Unhandled GIMPLE assignment!");
-
- // Unary expressions.
- case ABS_EXPR:
- RHS = EmitReg_ABS_EXPR(rhs1); break;
- case BIT_NOT_EXPR:
- RHS = EmitReg_BIT_NOT_EXPR(rhs1); break;
- case CONJ_EXPR:
- RHS = EmitReg_CONJ_EXPR(rhs1); break;
- case CONVERT_EXPR:
- case FIX_TRUNC_EXPR:
- case FLOAT_EXPR:
- case NOP_EXPR:
- RHS = EmitReg_CONVERT_EXPR(type, rhs1); break;
- case NEGATE_EXPR:
- RHS = EmitReg_NEGATE_EXPR(rhs1); break;
- case PAREN_EXPR:
- RHS = EmitReg_PAREN_EXPR(rhs1); break;
- case TRUTH_NOT_EXPR:
- RHS = EmitReg_TRUTH_NOT_EXPR(type, rhs1); break;
-
- // Comparisons.
- case EQ_EXPR:
- case GE_EXPR:
- case GT_EXPR:
- case LE_EXPR:
- case LT_EXPR:
- case LTGT_EXPR:
- case NE_EXPR:
- case ORDERED_EXPR:
- case UNEQ_EXPR:
- case UNGE_EXPR:
- case UNGT_EXPR:
- case UNLE_EXPR:
- case UNLT_EXPR:
- case UNORDERED_EXPR:
- // The GCC result may be of any integer type.
- RHS = Builder.CreateZExt(EmitCompare(rhs1, rhs2, code), GetRegType(type));
- break;
-
- // Binary expressions.
- case BIT_AND_EXPR:
- RHS = EmitReg_BIT_AND_EXPR(rhs1, rhs2); break;
- case BIT_IOR_EXPR:
- RHS = EmitReg_BIT_IOR_EXPR(rhs1, rhs2); break;
- case BIT_XOR_EXPR:
- RHS = EmitReg_BIT_XOR_EXPR(rhs1, rhs2); break;
- case CEIL_DIV_EXPR:
- RHS = EmitReg_CEIL_DIV_EXPR(type, rhs1, rhs2); break;
- case COMPLEX_EXPR:
- RHS = EmitReg_COMPLEX_EXPR(rhs1, rhs2); break;
- case EXACT_DIV_EXPR:
- RHS = EmitReg_TRUNC_DIV_EXPR(rhs1, rhs2, /*isExact*/true); break;
- case FLOOR_DIV_EXPR:
- RHS = EmitReg_FLOOR_DIV_EXPR(type, rhs1, rhs2); break;
- case FLOOR_MOD_EXPR:
- RHS = EmitReg_FLOOR_MOD_EXPR(type, rhs1, rhs2); break;
- case LROTATE_EXPR:
- RHS = EmitReg_RotateOp(type, rhs1, rhs2, Instruction::Shl,
- Instruction::LShr);
- break;
- case LSHIFT_EXPR:
- RHS = EmitReg_ShiftOp(rhs1, rhs2, Instruction::Shl); break;
- case MAX_EXPR:
- RHS = EmitReg_MinMaxExpr(type, rhs1, rhs2, ICmpInst::ICMP_UGE,
- ICmpInst::ICMP_SGE, FCmpInst::FCMP_OGE, true);
- break;
- case MIN_EXPR:
- RHS = EmitReg_MinMaxExpr(type, rhs1, rhs2, ICmpInst::ICMP_ULE,
- ICmpInst::ICMP_SLE, FCmpInst::FCMP_OLE, false);
- break;
- case MINUS_EXPR:
- RHS = EmitReg_MINUS_EXPR(rhs1, rhs2); break;
- case MULT_EXPR:
- RHS = EmitReg_MULT_EXPR(rhs1, rhs2); break;
- case PLUS_EXPR:
- RHS = EmitReg_PLUS_EXPR(rhs1, rhs2); break;
- case POINTER_PLUS_EXPR:
- RHS = EmitReg_POINTER_PLUS_EXPR(type, rhs1, rhs2); break;
- case RDIV_EXPR:
- RHS = EmitReg_RDIV_EXPR(rhs1, rhs2); break;
- case ROUND_DIV_EXPR:
- RHS = EmitReg_ROUND_DIV_EXPR(type, rhs1, rhs2); break;
- case RROTATE_EXPR:
- RHS = EmitReg_RotateOp(type, rhs1, rhs2, Instruction::LShr,
- Instruction::Shl);
- break;
- case RSHIFT_EXPR:
- RHS = EmitReg_ShiftOp(rhs1, rhs2, TYPE_UNSIGNED(type) ?
- Instruction::LShr : Instruction::AShr);
- break;
- case TRUNC_DIV_EXPR:
- RHS = EmitReg_TRUNC_DIV_EXPR(rhs1, rhs2, /*isExact*/false); break;
- case TRUNC_MOD_EXPR:
- RHS = EmitReg_TRUNC_MOD_EXPR(rhs1, rhs2); break;
- case TRUTH_AND_EXPR:
- RHS = EmitReg_TruthOp(type, rhs1, rhs2, Instruction::And); break;
- case TRUTH_OR_EXPR:
- RHS = EmitReg_TruthOp(type, rhs1, rhs2, Instruction::Or); break;
- case TRUTH_XOR_EXPR:
- RHS = EmitReg_TruthOp(type, rhs1, rhs2, Instruction::Xor); break;
- }
-
- assert(RHS->getType() == GetRegType(type) && "RHS has wrong type!");
- return RHS;
-}
-
-/// EmitAssignSingleRHS - Helper for EmitAssignRHS. Handles those RHS that are
-/// not register expressions.
-Value *TreeToLLVM::EmitAssignSingleRHS(tree rhs) {
- assert(!AGGREGATE_TYPE_P(TREE_TYPE(rhs)) && "Expected a scalar type!");
-
- switch (TREE_CODE(rhs)) {
- // Catch-all for SSA names, constants etc.
- default: return EmitRegister(rhs);
-
- // Expressions (tcc_expression).
- case ADDR_EXPR: return EmitADDR_EXPR(rhs);
- case OBJ_TYPE_REF: return EmitOBJ_TYPE_REF(rhs);
-
- // Exceptional (tcc_exceptional).
- case CONSTRUCTOR:
- // Vector constant constructors are gimple invariant.
- return is_gimple_constant(rhs) ?
- EmitRegisterConstant(rhs) : EmitCONSTRUCTOR(rhs, 0);
-
- // References (tcc_reference).
- case ARRAY_REF:
- case ARRAY_RANGE_REF:
- case BIT_FIELD_REF:
- case COMPONENT_REF:
- case IMAGPART_EXPR:
- case INDIRECT_REF:
- case REALPART_EXPR:
- case TARGET_MEM_REF:
- case VIEW_CONVERT_EXPR:
- return EmitLoadOfLValue(rhs); // Load from memory.
-
- // Declarations (tcc_declaration).
- case PARM_DECL:
- case RESULT_DECL:
- case VAR_DECL:
- return EmitLoadOfLValue(rhs); // Load from memory.
-
- // Constants (tcc_constant).
- case STRING_CST:
- return EmitLoadOfLValue(rhs); // Load from memory.
- }
-}
-
-/// OutputCallRHS - Convert the RHS of a GIMPLE_CALL.
-Value *TreeToLLVM::OutputCallRHS(gimple stmt, const MemRef *DestLoc) {
- // Check for a built-in function call. If we can lower it directly, do so
- // now.
- tree fndecl = gimple_call_fndecl(stmt);
- if (fndecl && DECL_BUILT_IN(fndecl) &&
- DECL_BUILT_IN_CLASS(fndecl) != BUILT_IN_FRONTEND) {
- Value *Res = 0;
- if (EmitBuiltinCall(stmt, fndecl, DestLoc, Res))
- return Res ? Mem2Reg(Res, gimple_call_return_type(stmt), Builder) : 0;
- }
-
- tree call_expr = gimple_call_fn(stmt);
- assert(TREE_TYPE (call_expr) &&
- (TREE_CODE(TREE_TYPE (call_expr)) == POINTER_TYPE ||
- TREE_CODE(TREE_TYPE (call_expr)) == REFERENCE_TYPE)
- && "Not calling a function pointer?");
-
- tree function_type = TREE_TYPE(TREE_TYPE (call_expr));
- Value *Callee = EmitRegister(call_expr);
- CallingConv::ID CallingConv;
- AttrListPtr PAL;
-
- const Type *Ty =
- TheTypeConverter->ConvertFunctionType(function_type,
- fndecl,
- gimple_call_chain(stmt),
- CallingConv, PAL);
-
- // If this is a direct call to a function using a static chain then we need
- // to ensure the function type is the one just calculated: it has an extra
- // parameter for the chain.
- Callee = Builder.CreateBitCast(Callee, Ty->getPointerTo());
-
- Value *Result = EmitCallOf(Callee, stmt, DestLoc, PAL);
-
- // When calling a "noreturn" function, output an unreachable instruction right
- // after the function to prevent LLVM from thinking that control flow will
- // fall into the subsequent block.
- if (gimple_call_flags(stmt) & ECF_NORETURN) {
- Builder.CreateUnreachable();
- BeginBlock(BasicBlock::Create(Context));
- }
-
- return Result ? Mem2Reg(Result, gimple_call_return_type(stmt), Builder) : 0;
-}
-
-/// WriteScalarToLHS - Store RHS, a non-aggregate value, into the given LHS.
-void TreeToLLVM::WriteScalarToLHS(tree lhs, Value *RHS) {
- // Perform a useless type conversion (useless_type_conversion_p).
- RHS = UselesslyTypeConvert(RHS, GetRegType(TREE_TYPE(lhs)));
-
- // If this is the definition of an ssa name, record it in the SSANames map.
- if (TREE_CODE(lhs) == SSA_NAME) {
- if (flag_verbose_asm)
- NameValue(RHS, lhs);
- DefineSSAName(lhs, RHS);
- return;
- }
-
- if (canEmitRegisterVariable(lhs)) {
- // If this is a store to a register variable, EmitLV can't handle the dest
- // (there is no l-value of a register variable). Emit an inline asm node
- // that copies the value into the specified register.
- EmitModifyOfRegisterVariable(lhs, RHS);
- return;
- }
-
- LValue LV = EmitLV(lhs);
- LV.Volatile = TREE_THIS_VOLATILE(lhs);
- // TODO: Arrange for Volatile to already be set in the LValue.
- if (!LV.isBitfield()) {
- // Non-bitfield, scalar value. Just emit a store.
- StoreRegisterToMemory(RHS, LV, TREE_TYPE(lhs), Builder);
- return;
- }
-
- // Last case, this is a store to a bitfield, so we have to emit a
- // read/modify/write sequence.
-
- if (!LV.BitSize)
- return;
-
- unsigned Alignment = LV.getAlignment();
-
- const Type *ValTy = cast<PointerType>(LV.Ptr->getType())->getElementType();
- unsigned ValSizeInBits = ValTy->getPrimitiveSizeInBits();
-
- // The number of stores needed to write the entire bitfield.
- unsigned Strides = 1 + (LV.BitStart + LV.BitSize - 1) / ValSizeInBits;
-
- assert(ValTy->isIntegerTy() && "Invalid bitfield lvalue!");
- assert(ValSizeInBits > LV.BitStart && "Bad bitfield lvalue!");
- assert(ValSizeInBits >= LV.BitSize && "Bad bitfield lvalue!");
- assert(2*ValSizeInBits > LV.BitSize+LV.BitStart && "Bad bitfield lvalue!");
-
- bool Signed = !TYPE_UNSIGNED(TREE_TYPE(lhs));
- RHS = CastToAnyType(RHS, Signed, ValTy, Signed);
-
- for (unsigned I = 0; I < Strides; I++) {
- unsigned Index = BYTES_BIG_ENDIAN ? Strides - I - 1 : I; // LSB first
- unsigned ThisFirstBit = Index * ValSizeInBits;
- unsigned ThisLastBitPlusOne = ThisFirstBit + ValSizeInBits;
- if (ThisFirstBit < LV.BitStart)
- ThisFirstBit = LV.BitStart;
- if (ThisLastBitPlusOne > LV.BitStart+LV.BitSize)
- ThisLastBitPlusOne = LV.BitStart+LV.BitSize;
-
- Value *Ptr = Index ?
- Builder.CreateGEP(LV.Ptr,
- ConstantInt::get(Type::getInt32Ty(Context), Index)) :
- LV.Ptr;
- LoadInst *LI = Builder.CreateLoad(Ptr, LV.Volatile);
- LI->setAlignment(Alignment);
- Value *OldVal = LI;
- Value *NewVal = RHS;
-
- unsigned BitsInVal = ThisLastBitPlusOne - ThisFirstBit;
- unsigned FirstBitInVal = ThisFirstBit % ValSizeInBits;
-
- if (BYTES_BIG_ENDIAN)
- FirstBitInVal = ValSizeInBits-FirstBitInVal-BitsInVal;
-
- // If not storing into the zero'th bit, shift the Src value to the left.
- if (FirstBitInVal) {
- Value *ShAmt = ConstantInt::get(ValTy, FirstBitInVal);
- NewVal = Builder.CreateShl(NewVal, ShAmt);
- }
-
- // Next, if this doesn't touch the top bit, mask out any bits that shouldn't
- // be set in the result.
- uint64_t MaskVal = 1;
- MaskVal = ((MaskVal << BitsInVal)-1) << FirstBitInVal;
- Constant *Mask = ConstantInt::get(Type::getInt64Ty(Context), MaskVal);
- Mask = Builder.getFolder().CreateTruncOrBitCast(Mask, ValTy);
-
- if (FirstBitInVal+BitsInVal != ValSizeInBits)
- NewVal = Builder.CreateAnd(NewVal, Mask);
-
- // Next, mask out the bits this bit-field should include from the old value.
- Mask = Builder.getFolder().CreateNot(Mask);
- OldVal = Builder.CreateAnd(OldVal, Mask);
-
- // Finally, merge the two together and store it.
- NewVal = Builder.CreateOr(OldVal, NewVal);
-
- StoreInst *SI = Builder.CreateStore(NewVal, Ptr, LV.Volatile);
- SI->setAlignment(Alignment);
-
- if (I + 1 < Strides) {
- Value *ShAmt = ConstantInt::get(ValTy, BitsInVal);
- RHS = Builder.CreateLShr(RHS, ShAmt);
- }
- }
-}
Removed: dragonegg/trunk/llvm-debug.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-debug.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-debug.cpp (original)
+++ dragonegg/trunk/llvm-debug.cpp (removed)
@@ -1,1811 +0,0 @@
-//===------------ llvm-debug.cpp - Debug information gathering ------------===//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Jim Laskey,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file implements debug information gathering.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-debug.h"
-
-// LLVM headers
-#include "llvm/Module.h"
-#include "llvm/ADT/STLExtras.h"
-
-// System headers
-#include <gmp.h>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-
-#include "flags.h"
-#include "langhooks.h"
-#include "toplev.h"
-#include "version.h"
-}
-
-using namespace llvm;
-using namespace llvm::dwarf;
-
-#ifndef LLVMTESTDEBUG
-#define DEBUGASSERT(S) ((void)0)
-#else
-#define DEBUGASSERT(S) assert(S)
-#endif
-
-
-/// DirectoryAndFile - Extract the directory and file name from a path. If no
-/// directory is specified, then use the source working directory.
-static void DirectoryAndFile(const std::string &FullPath,
- std::string &Directory, std::string &FileName) {
- // Look for the directory slash.
- size_t Slash = FullPath.rfind('/');
-
- // If no slash
- if (Slash == std::string::npos) {
- // The entire path is the file name.
- Directory = "";
- FileName = FullPath;
- } else {
- // Separate the directory from the file name.
- Directory = FullPath.substr(0, Slash);
- FileName = FullPath.substr(Slash + 1);
- }
-
- // If no directory present then use source working directory.
- if (Directory.empty() || Directory[0] != '/') {
- Directory = std::string(get_src_pwd()) + "/" + Directory;
- }
-}
-
-/// NodeSizeInBits - Returns the size in bits stored in a tree node regardless
-/// of whether the node is a TYPE or DECL.
-static uint64_t NodeSizeInBits(tree Node) {
- if (TREE_CODE(Node) == ERROR_MARK) {
- return BITS_PER_WORD;
- } else if (TYPE_P(Node)) {
- if (TYPE_SIZE(Node) == NULL_TREE)
- return 0;
- else if (isInt64(TYPE_SIZE(Node), 1))
- return getINTEGER_CSTVal(TYPE_SIZE(Node));
- else
- return TYPE_ALIGN(Node);
- } else if (DECL_P(Node)) {
- if (DECL_SIZE(Node) == NULL_TREE)
- return 0;
- else if (isInt64(DECL_SIZE(Node), 1))
- return getINTEGER_CSTVal(DECL_SIZE(Node));
- else
- return DECL_ALIGN(Node);
- }
-
- return 0;
-}
-
-/// NodeAlignInBits - Returns the alignment in bits stored in a tree node
-/// regardless of whether the node is a TYPE or DECL.
-static uint64_t NodeAlignInBits(tree Node) {
- if (TREE_CODE(Node) == ERROR_MARK) return BITS_PER_WORD;
- if (TYPE_P(Node)) return TYPE_ALIGN(Node);
- if (DECL_P(Node)) return DECL_ALIGN(Node);
- return BITS_PER_WORD;
-}
-
-/// FieldType - Returns the type node of a structure member field.
-///
-static tree FieldType(tree Field) {
- if (TREE_CODE (Field) == ERROR_MARK) return integer_type_node;
- return DECL_BIT_FIELD_TYPE(Field) ?
- DECL_BIT_FIELD_TYPE(Field) : TREE_TYPE (Field);
-}
-
-/// GetNodeName - Returns the name stored in a node regardless of whether the
-/// node is a TYPE or DECL.
-static StringRef GetNodeName(tree Node) {
- tree Name = NULL;
-
- if (DECL_P(Node)) {
- Name = DECL_NAME(Node);
- } else if (TYPE_P(Node)) {
- Name = TYPE_NAME(Node);
- }
-
- if (Name) {
- if (TREE_CODE(Name) == IDENTIFIER_NODE) {
- return IDENTIFIER_POINTER(Name);
- } else if (TREE_CODE(Name) == TYPE_DECL && DECL_NAME(Name) &&
- !DECL_IGNORED_P(Name)) {
- return StringRef(IDENTIFIER_POINTER(DECL_NAME(Name)));
- }
- }
-
- return StringRef();
-}
-
-/// GetNodeLocation - Returns the location stored in a node regardless of
-/// whether the node is a TYPE or DECL. UseStub is true if we should consider
-/// the type stub as the actual location (ignored in structs/unions/enums).
-static expanded_location GetNodeLocation(tree Node, bool UseStub = true) {
- expanded_location Location = { NULL, 0, 0, false };
-
- if (Node == NULL_TREE)
- return Location;
-
- tree Name = NULL;
-
- if (DECL_P(Node)) {
- Name = DECL_NAME(Node);
- } else if (TYPE_P(Node)) {
- Name = TYPE_NAME(Node);
- }
-
- if (Name) {
- if (TYPE_STUB_DECL(Name)) {
- tree Stub = TYPE_STUB_DECL(Name);
- Location = expand_location(DECL_SOURCE_LOCATION(Stub));
- } else if (DECL_P(Name)) {
- Location = expand_location(DECL_SOURCE_LOCATION(Name));
- }
- }
-
- if (!Location.line) {
- if (UseStub && TYPE_STUB_DECL(Node)) {
- tree Stub = TYPE_STUB_DECL(Node);
- Location = expand_location(DECL_SOURCE_LOCATION(Stub));
- } else if (DECL_P(Node)) {
- Location = expand_location(DECL_SOURCE_LOCATION(Node));
- }
- }
-
- return Location;
-}
-
-static StringRef getLinkageName(tree Node) {
-
- // Use llvm value name as linkage name if it is available.
- if (DECL_LLVM_SET_P(Node)) {
- Value *V = DECL_LLVM(Node);
- return V->getName();
- }
-
- tree decl_name = DECL_NAME(Node);
- if (decl_name != NULL && IDENTIFIER_POINTER (decl_name) != NULL) {
- if (TREE_PUBLIC(Node) &&
- DECL_ASSEMBLER_NAME(Node) != DECL_NAME(Node) &&
- !DECL_ABSTRACT(Node)) {
- return StringRef(IDENTIFIER_POINTER(DECL_ASSEMBLER_NAME(Node)));
- }
- }
- return StringRef();
-}
-
-DebugInfo::DebugInfo(Module *m)
-: M(m)
-, DebugFactory(*m)
-, CurFullPath("")
-, CurLineNo(0)
-, PrevFullPath("")
-, PrevLineNo(0)
-, PrevBB(NULL)
-, RegionStack()
-{}
-
-/// getFunctionName - Get function name for the given FnDecl. If the
-/// name is constructed on demand (e.g. a C++ destructor) then the name
-/// is stored on the side.
-StringRef DebugInfo::getFunctionName(tree FnDecl) {
- StringRef FnNodeName = GetNodeName(FnDecl);
- // Use dwarf_name to construct function names. In C++ this is used to
- // create human readable destructor names.
- StringRef FnName = lang_hooks.dwarf_name(FnDecl, 0);
- if (FnNodeName.equals(FnName))
- return FnNodeName;
-
- // Use the name returned by dwarf_name. It is in temporary storage, so make a
- // copy first.
- char *StrPtr = FunctionNames.Allocate<char>(FnName.size() + 1);
- strncpy(StrPtr, FnName.data(), FnName.size());
- StrPtr[FnName.size()] = 0;
- return StringRef(StrPtr);
-}
-
-/// EmitFunctionStart - Constructs the debug code for entering a function.
-void DebugInfo::EmitFunctionStart(tree FnDecl, Function *Fn) {
- DIType FNType = getOrCreateType(TREE_TYPE(FnDecl));
-
- std::map<tree_node *, WeakVH >::iterator I = SPCache.find(FnDecl);
- if (I != SPCache.end()) {
- DISubprogram SPDecl(cast<MDNode>(I->second));
- DISubprogram SP =
- DebugFactory.CreateSubprogramDefinition(SPDecl);
- SPDecl->replaceAllUsesWith(SP);
-
- // Push function on region stack.
- RegionStack.push_back(WeakVH(SP));
- RegionMap[FnDecl] = WeakVH(SP);
- return;
- }
-
- bool ArtificialFnWithAbstractOrigin = false;
- // If this artificial function has an abstract origin then put this function
- // at module scope. The abstract copy will be placed in the appropriate region.
- if (DECL_ARTIFICIAL (FnDecl)
- && DECL_ABSTRACT_ORIGIN (FnDecl)
- && DECL_ABSTRACT_ORIGIN (FnDecl) != FnDecl)
- ArtificialFnWithAbstractOrigin = true;
-
- DIDescriptor SPContext = ArtificialFnWithAbstractOrigin ?
- getOrCreateFile(main_input_filename) :
- findRegion (DECL_CONTEXT(FnDecl));
-
- // Creating context may have triggered creation of this SP descriptor. So
- // check the cache again.
- I = SPCache.find(FnDecl);
- if (I != SPCache.end()) {
- DISubprogram SPDecl(cast<MDNode>(I->second));
- DISubprogram SP =
- DebugFactory.CreateSubprogramDefinition(SPDecl);
- SPDecl->replaceAllUsesWith(SP);
-
- // Push function on region stack.
- RegionStack.push_back(WeakVH(SP));
- RegionMap[FnDecl] = WeakVH(SP);
- return;
- }
-
- // Gather location information.
- expanded_location Loc = GetNodeLocation(FnDecl, false);
- StringRef LinkageName = getLinkageName(FnDecl);
-
- unsigned lineno = CurLineNo;
-
- unsigned Virtuality = 0;
- unsigned VIndex = 0;
- DIType ContainingType;
- if (DECL_VINDEX (FnDecl) &&
- DECL_CONTEXT (FnDecl) && TYPE_P((DECL_CONTEXT (FnDecl)))) { // Workaround GCC PR42653
- if (host_integerp (DECL_VINDEX (FnDecl), 0))
- VIndex = tree_low_cst (DECL_VINDEX (FnDecl), 0);
- Virtuality = dwarf::DW_VIRTUALITY_virtual;
- ContainingType = getOrCreateType(DECL_CONTEXT (FnDecl));
- }
-
- StringRef FnName = getFunctionName(FnDecl);
-
- DISubprogram SP =
- DebugFactory.CreateSubprogram(SPContext,
- FnName, FnName,
- LinkageName,
- getOrCreateFile(Loc.file), lineno,
- FNType,
- Fn->hasInternalLinkage(),
- true /*definition*/,
- Virtuality, VIndex, ContainingType,
- DECL_ARTIFICIAL (FnDecl), optimize);
-
- SPCache[FnDecl] = WeakVH(SP);
-
- // Push function on region stack.
- RegionStack.push_back(WeakVH(SP));
- RegionMap[FnDecl] = WeakVH(SP);
-}
-
-/// getOrCreateNameSpace - Get name space descriptor for the tree node.
-DINameSpace DebugInfo::getOrCreateNameSpace(tree Node, DIDescriptor Context) {
- std::map<tree_node *, WeakVH >::iterator I =
- NameSpaceCache.find(Node);
- if (I != NameSpaceCache.end())
- return DINameSpace(cast<MDNode>(I->second));
-
- expanded_location Loc = GetNodeLocation(Node, false);
- DINameSpace DNS =
- DebugFactory.CreateNameSpace(Context, GetNodeName(Node),
- getOrCreateFile(Loc.file), Loc.line);
-
- NameSpaceCache[Node] = WeakVH(DNS);
- return DNS;
-}
-
-/// findRegion - Find tree_node N's region.
-DIDescriptor DebugInfo::findRegion(tree Node) {
- if (Node == NULL_TREE)
- return getOrCreateFile(main_input_filename);
-
- std::map<tree_node *, WeakVH>::iterator I = RegionMap.find(Node);
- if (I != RegionMap.end())
- if (MDNode *R = dyn_cast_or_null<MDNode>(&*I->second))
- return DIDescriptor(R);
-
- if (TYPE_P (Node)) {
- DIType Ty = getOrCreateType(Node);
- return DIDescriptor(Ty);
- } else if (DECL_P (Node)) {
- if (TREE_CODE (Node) == NAMESPACE_DECL) {
- DIDescriptor NSContext = findRegion(DECL_CONTEXT(Node));
- DINameSpace NS = getOrCreateNameSpace(Node, NSContext);
- return DIDescriptor(NS);
- }
- return findRegion (DECL_CONTEXT (Node));
- }
-
- // Otherwise main compile unit covers everything.
- return getOrCreateFile(main_input_filename);
-}
-
-/// EmitFunctionEnd - Pop the region stack and reset current lexical block.
-void DebugInfo::EmitFunctionEnd(bool EndFunction) {
- assert(!RegionStack.empty() && "Region stack mismatch, stack empty!");
- RegionStack.pop_back();
- // Blocks get erased; clearing these is needed for determinism, and also
- // a good idea if the next function gets inlined.
- if (EndFunction) {
- PrevBB = NULL;
- PrevLineNo = 0;
- PrevFullPath = NULL;
- }
-}
-
-/// EmitDeclare - Constructs the debug code for allocation of a new variable.
-void DebugInfo::EmitDeclare(tree decl, unsigned Tag, const char *Name,
- tree type, Value *AI, LLVMBuilder &Builder) {
-
- // Ignore compiler generated temporaries.
- if (DECL_IGNORED_P(decl))
- return;
-
- assert(!RegionStack.empty() && "Region stack mismatch, stack empty!");
-
- expanded_location Loc = GetNodeLocation(decl, false);
-
- // Construct variable.
- DIScope VarScope = DIScope(cast<MDNode>(RegionStack.back()));
- DIType Ty = getOrCreateType(type);
- if (!Ty && TREE_CODE(type) == OFFSET_TYPE)
- Ty = createPointerType(TREE_TYPE(type));
- if (DECL_ARTIFICIAL (decl))
- Ty = DebugFactory.CreateArtificialType(Ty);
- // If type info is not available then do not emit debug info for this var.
- if (!Ty)
- return;
- llvm::DIVariable D =
- DebugFactory.CreateVariable(Tag, VarScope,
- Name, getOrCreateFile(Loc.file),
- Loc.line, Ty, optimize);
-
- Instruction *Call =
- DebugFactory.InsertDeclare(AI, D, Builder.GetInsertBlock());
-
- Call->setDebugLoc(DebugLoc::get(Loc.line, 0, VarScope));
-}
-
-/// EmitStopPoint - Set current source location.
-void DebugInfo::EmitStopPoint(BasicBlock *CurBB, LLVMBuilder &Builder) {
- // Don't bother if things are the same as last time.
- if (PrevLineNo == CurLineNo &&
- PrevBB == CurBB &&
- (PrevFullPath == CurFullPath ||
- !strcmp(PrevFullPath, CurFullPath))) return;
- if (!CurFullPath[0] || CurLineNo == 0) return;
-
- // Update last state.
- PrevFullPath = CurFullPath;
- PrevLineNo = CurLineNo;
- PrevBB = CurBB;
-
- if (RegionStack.empty())
- return;
- MDNode *Scope = cast<MDNode>(RegionStack.back());
- Builder.SetCurrentDebugLocation(DebugLoc::get(CurLineNo,0/*col*/,Scope));
-}
-
-/// EmitGlobalVariable - Emit information about a global variable.
-///
-void DebugInfo::EmitGlobalVariable(GlobalVariable *GV, tree decl) {
- if (DECL_ARTIFICIAL(decl) || DECL_IGNORED_P(decl))
- return;
- // Gather location information.
- expanded_location Loc = expand_location(DECL_SOURCE_LOCATION(decl));
- DIType TyD = getOrCreateType(TREE_TYPE(decl));
- StringRef DispName = GV->getName();
- if (DECL_NAME(decl)) {
- if (IDENTIFIER_POINTER(DECL_NAME(decl)))
- DispName = IDENTIFIER_POINTER(DECL_NAME(decl));
- }
- StringRef LinkageName;
- // gdb does not expect linkage names for function-local statics.
- if (DECL_CONTEXT (decl))
- if (TREE_CODE (DECL_CONTEXT (decl)) != FUNCTION_DECL)
- LinkageName = GV->getName();
- DebugFactory.CreateGlobalVariable(findRegion(DECL_CONTEXT(decl)),
- DispName, DispName, LinkageName,
- getOrCreateFile(Loc.file), Loc.line,
- TyD, GV->hasInternalLinkage(),
- true/*definition*/, GV);
-}
-
-/// createBasicType - Create BasicType.
-DIType DebugInfo::createBasicType(tree type) {
-
- StringRef TypeName = GetNodeName(type);
- uint64_t Size = NodeSizeInBits(type);
- uint64_t Align = NodeAlignInBits(type);
-
- unsigned Encoding = 0;
-
- switch (TREE_CODE(type)) {
- case INTEGER_TYPE:
- if (TYPE_STRING_FLAG (type)) {
- if (TYPE_UNSIGNED (type))
- Encoding = DW_ATE_unsigned_char;
- else
- Encoding = DW_ATE_signed_char;
- }
- else if (TYPE_UNSIGNED (type))
- Encoding = DW_ATE_unsigned;
- else
- Encoding = DW_ATE_signed;
- break;
- case REAL_TYPE:
- Encoding = DW_ATE_float;
- break;
- case COMPLEX_TYPE:
- Encoding = TREE_CODE(TREE_TYPE(type)) == REAL_TYPE ?
- DW_ATE_complex_float : DW_ATE_lo_user;
- break;
- case BOOLEAN_TYPE:
- Encoding = DW_ATE_boolean;
- break;
- default: {
- DEBUGASSERT(0 && "Basic type case missing");
- Encoding = DW_ATE_signed;
- Size = BITS_PER_WORD;
- Align = BITS_PER_WORD;
- break;
- }
- }
-
- return
- DebugFactory.CreateBasicType(getOrCreateFile(main_input_filename),
- TypeName,
- getOrCreateFile(main_input_filename),
- 0, Size, Align,
- 0, 0, Encoding);
-}
-
-/// isArtificialArgumentType - Return true if arg_type represents an artificial
-/// argument, i.e. "this" in C++.
-static bool isArtificialArgumentType(tree arg_type, tree method_type) {
- if (TREE_CODE (method_type) != METHOD_TYPE) return false;
- if (TREE_CODE (arg_type) != POINTER_TYPE) return false;
- if (TREE_TYPE (arg_type) == TYPE_METHOD_BASETYPE (method_type))
- return true;
- if (TYPE_MAIN_VARIANT (TREE_TYPE (arg_type))
- && TYPE_MAIN_VARIANT (TREE_TYPE (arg_type)) != TREE_TYPE (arg_type)
- && (TYPE_MAIN_VARIANT (TREE_TYPE (arg_type))
- == TYPE_METHOD_BASETYPE (method_type)))
- return true;
- return false;
-}
-
-/// createMethodType - Create MethodType.
-DIType DebugInfo::createMethodType(tree type) {
-
- // Create a placeholder type first. This may be used as a context
- // for the argument types.
- llvm::DIType FwdType = DebugFactory.CreateTemporaryType();
- llvm::MDNode *FTN = FwdType;
- llvm::TrackingVH<llvm::MDNode> FwdTypeNode = FTN;
- TypeCache[type] = WeakVH(FwdType);
- // Push the struct on region stack.
- RegionStack.push_back(WeakVH(FwdType));
- RegionMap[type] = WeakVH(FwdType);
-
- llvm::SmallVector<llvm::DIDescriptor, 16> EltTys;
-
- // Add the result type at least.
- EltTys.push_back(getOrCreateType(TREE_TYPE(type)));
-
- // Set up remainder of arguments.
- bool ProcessedFirstArg = false;
- for (tree arg = TYPE_ARG_TYPES(type); arg; arg = TREE_CHAIN(arg)) {
- tree formal_type = TREE_VALUE(arg);
- if (formal_type == void_type_node) break;
- llvm::DIType FormalType = getOrCreateType(formal_type);
- if (!ProcessedFirstArg && isArtificialArgumentType(formal_type, type)) {
- DIType AFormalType = DebugFactory.CreateArtificialType(FormalType);
- EltTys.push_back(AFormalType);
- } else
- EltTys.push_back(FormalType);
- if (!ProcessedFirstArg)
- ProcessedFirstArg = true;
- }
-
- llvm::DIArray EltTypeArray =
- DebugFactory.GetOrCreateArray(EltTys.data(), EltTys.size());
-
- RegionStack.pop_back();
- std::map<tree_node *, WeakVH>::iterator RI = RegionMap.find(type);
- if (RI != RegionMap.end())
- RegionMap.erase(RI);
-
- llvm::DIType RealType =
- DebugFactory.CreateCompositeType(llvm::dwarf::DW_TAG_subroutine_type,
- findRegion(TYPE_CONTEXT(type)),
- StringRef(),
- getOrCreateFile(main_input_filename),
- 0, 0, 0, 0, 0,
- llvm::DIType(), EltTypeArray);
-
- // Now that we have a real decl for the struct, replace anything using the
- // old decl with the new one. This will recursively update the debug info.
- llvm::DIType(FwdTypeNode).replaceAllUsesWith(RealType);
-
- return RealType;
-}
-
-/// createPointerType - Create PointerType.
-DIType DebugInfo::createPointerType(tree type) {
-
- DIType FromTy = getOrCreateType(TREE_TYPE(type));
- // type* and type&
- // FIXME: Should BLOCK_POINTER_TYPE have its own DW_TAG?
- unsigned Tag = TREE_CODE(type) == REFERENCE_TYPE ?
- DW_TAG_reference_type: DW_TAG_pointer_type;
- unsigned Flags = 0;
-
- // Check if this pointer type has a name.
- if (tree TyName = TYPE_NAME(type))
- if (TREE_CODE(TyName) == TYPE_DECL && !DECL_ORIGINAL_TYPE(TyName)) {
- expanded_location TypeNameLoc = GetNodeLocation(TyName);
- DIType Ty =
- DebugFactory.CreateDerivedType(Tag, findRegion(DECL_CONTEXT(TyName)),
- GetNodeName(TyName),
- getOrCreateFile(TypeNameLoc.file),
- TypeNameLoc.line,
- 0 /*size*/,
- 0 /*align*/,
- 0 /*offset */,
- 0 /*flags*/,
- FromTy);
- TypeCache[TyName] = WeakVH(Ty);
- return Ty;
- }
-
- StringRef PName = FromTy.getName();
- DIType PTy =
- DebugFactory.CreateDerivedType(Tag, findRegion(TYPE_CONTEXT(type)),
- Tag == DW_TAG_pointer_type ?
- StringRef() : PName,
- getOrCreateFile(main_input_filename),
- 0 /*line no*/,
- NodeSizeInBits(type),
- NodeAlignInBits(type),
- 0 /*offset */,
- Flags,
- FromTy);
- return PTy;
-}
-
-/// createArrayType - Create ArrayType.
-DIType DebugInfo::createArrayType(tree type) {
-
- // type[n][m]...[p]
- if (TREE_CODE (type) == ARRAY_TYPE
- && TYPE_STRING_FLAG(type) && TREE_CODE(TREE_TYPE(type)) == INTEGER_TYPE){
- DEBUGASSERT(0 && "Don't support pascal strings");
- return DIType();
- }
-
- unsigned Tag = 0;
-
- if (TREE_CODE(type) == VECTOR_TYPE) {
- Tag = DW_TAG_vector_type;
- type = TREE_TYPE (TYPE_FIELDS (TYPE_DEBUG_REPRESENTATION_TYPE (type)));
- }
- else
- Tag = DW_TAG_array_type;
-
- // Add the dimensions of the array. FIXME: This loses CV qualifiers from
- // interior arrays, do we care? Why aren't nested arrays represented the
- // obvious/recursive way?
- llvm::SmallVector<llvm::DIDescriptor, 8> Subscripts;
-
- // There will be an ARRAY_TYPE node for each rank, followed by the derived
- // type.
- tree atype = type;
- tree EltTy = TREE_TYPE(atype);
- for (; TREE_CODE(atype) == ARRAY_TYPE;
- atype = TREE_TYPE(atype)) {
- tree Domain = TYPE_DOMAIN(atype);
- if (Domain) {
- // FIXME - handle dynamic ranges
- tree MinValue = TYPE_MIN_VALUE(Domain);
- tree MaxValue = TYPE_MAX_VALUE(Domain);
- uint64_t Low = 0;
- uint64_t Hi = 0;
- if (MinValue && isInt64(MinValue, 0))
- Low = getINTEGER_CSTVal(MinValue);
- if (MaxValue && isInt64(MaxValue, 0))
- Hi = getINTEGER_CSTVal(MaxValue);
- Subscripts.push_back(DebugFactory.GetOrCreateSubrange(Low, Hi));
- }
- EltTy = TREE_TYPE(atype);
- }
-
- llvm::DIArray SubscriptArray =
- DebugFactory.GetOrCreateArray(Subscripts.data(), Subscripts.size());
- expanded_location Loc = GetNodeLocation(type);
- return DebugFactory.CreateCompositeType(llvm::dwarf::DW_TAG_array_type,
- findRegion(TYPE_CONTEXT(type)),
- StringRef(),
- getOrCreateFile(Loc.file), 0,
- NodeSizeInBits(type),
- NodeAlignInBits(type), 0, 0,
- getOrCreateType(EltTy),
- SubscriptArray);
-}
-
-/// createEnumType - Create EnumType.
-DIType DebugInfo::createEnumType(tree type) {
- // enum { a, b, ..., z };
- llvm::SmallVector<llvm::DIDescriptor, 32> Elements;
-
- if (TYPE_SIZE(type)) {
- for (tree Link = TYPE_VALUES(type); Link; Link = TREE_CHAIN(Link)) {
- tree EnumValue = TREE_VALUE(Link);
- if (TREE_CODE(EnumValue) == CONST_DECL)
- EnumValue = DECL_INITIAL(EnumValue);
- int64_t Value = getINTEGER_CSTVal(EnumValue);
- const char *EnumName = IDENTIFIER_POINTER(TREE_PURPOSE(Link));
- Elements.push_back(DebugFactory.CreateEnumerator(EnumName, Value));
- }
- }
-
- llvm::DIArray EltArray =
- DebugFactory.GetOrCreateArray(Elements.data(), Elements.size());
-
- expanded_location Loc = { NULL, 0, 0, false };
- if (TYPE_SIZE(type))
- // Incomplete enums do not have any location info.
- Loc = GetNodeLocation(TREE_CHAIN(type), false);
-
- return DebugFactory.CreateCompositeType(llvm::dwarf::DW_TAG_enumeration_type,
- findRegion(TYPE_CONTEXT(type)),
- GetNodeName(type),
- getOrCreateFile(Loc.file),
- Loc.line,
- NodeSizeInBits(type),
- NodeAlignInBits(type), 0, 0,
- llvm::DIType(), EltArray);
-}
-
-/// createStructType - Create StructType for struct or union or class.
-DIType DebugInfo::createStructType(tree type) {
-
- // struct { a; b; ... z; }; | union { a; b; ... z; };
- unsigned Tag = TREE_CODE(type) == RECORD_TYPE ? DW_TAG_structure_type :
- DW_TAG_union_type;
-
- unsigned RunTimeLang = 0;
-//TODO if (TYPE_LANG_SPECIFIC (type)
-//TODO && lang_hooks.types.is_runtime_specific_type (type))
-//TODO {
-//TODO unsigned CULang = TheCU.getLanguage();
-//TODO switch (CULang) {
-//TODO case DW_LANG_ObjC_plus_plus :
-//TODO RunTimeLang = DW_LANG_ObjC_plus_plus;
-//TODO break;
-//TODO case DW_LANG_ObjC :
-//TODO RunTimeLang = DW_LANG_ObjC;
-//TODO break;
-//TODO case DW_LANG_C_plus_plus :
-//TODO RunTimeLang = DW_LANG_C_plus_plus;
-//TODO break;
-//TODO default:
-//TODO break;
-//TODO }
-//TODO }
-
- // Records and classes and unions can all be recursive. To handle them,
- // we first generate a debug descriptor for the struct as a forward
- // declaration. Then (if it is a definition) we go through and get debug
- // info for all of its members. Finally, we create a descriptor for the
- // complete type (which may refer to the forward decl if the struct is
- // recursive) and replace all uses of the forward declaration with the
- // final definition.
- expanded_location Loc = GetNodeLocation(TREE_CHAIN(type), false);
- unsigned SFlags = 0;
- DIDescriptor TyContext = findRegion(TYPE_CONTEXT(type));
-
- // Check if this type was created while creating the context information
- // descriptor.
- std::map<tree_node *, WeakVH >::iterator I = TypeCache.find(type);
- if (I != TypeCache.end())
- if (MDNode *TN = dyn_cast_or_null<MDNode>(&*I->second))
- return DIType(TN);
-
- // Forward declaration.
- if (TYPE_SIZE(type) == 0) {
- llvm::DICompositeType FwdDecl =
- DebugFactory.CreateCompositeType(Tag,
- TyContext,
- GetNodeName(type),
- getOrCreateFile(Loc.file),
- Loc.line,
- 0, 0, 0,
- SFlags | llvm::DIType::FlagFwdDecl,
- llvm::DIType(), llvm::DIArray(),
- RunTimeLang);
- return FwdDecl;
- }
-
- llvm::DIType FwdDecl = DebugFactory.CreateTemporaryType();
-
- // Insert into the TypeCache so that recursive uses will find it.
- llvm::MDNode *FDN = FwdDecl;
- llvm::TrackingVH<llvm::MDNode> FwdDeclNode = FDN;
- TypeCache[type] = WeakVH(FwdDecl);
-
- // Push the struct on region stack.
- RegionStack.push_back(WeakVH(FwdDecl));
- RegionMap[type] = WeakVH(FwdDecl);
-
- // Convert all the elements.
- llvm::SmallVector<llvm::DIDescriptor, 16> EltTys;
-
- if (tree binfo = TYPE_BINFO(type)) {
- VEC(tree,gc) *accesses = BINFO_BASE_ACCESSES (binfo);
-
- for (unsigned i = 0, e = BINFO_N_BASE_BINFOS(binfo); i != e; ++i) {
- tree BInfo = BINFO_BASE_BINFO(binfo, i);
- tree BInfoType = BINFO_TYPE (BInfo);
- DIType BaseClass = getOrCreateType(BInfoType);
- unsigned BFlags = 0;
- if (BINFO_VIRTUAL_P (BInfo))
- BFlags = llvm::DIType::FlagVirtual;
- if (accesses) {
- tree access = VEC_index (tree, accesses, i);
- if (access == access_protected_node)
- BFlags |= llvm::DIType::FlagProtected;
- else if (access == access_private_node)
- BFlags |= llvm::DIType::FlagPrivate;
- }
-
- // Check for zero BINFO_OFFSET.
- // FIXME: Is this correct?
- unsigned Offset = BINFO_OFFSET(BInfo) ?
- getINTEGER_CSTVal(BINFO_OFFSET(BInfo))*8 : 0;
-
- if (BINFO_VIRTUAL_P (BInfo))
- Offset = 0 - getINTEGER_CSTVal(BINFO_VPTR_FIELD (BInfo));
- // FIXME: name, size, align, etc.
- DIType DTy =
- DebugFactory.CreateDerivedType(DW_TAG_inheritance,
- findRegion(TYPE_CONTEXT(type)), StringRef(),
- llvm::DIFile(), 0,0,0,
- Offset,
- BFlags, BaseClass);
- EltTys.push_back(DTy);
- }
- }
-
- // Now add members of this class.
- for (tree Member = TYPE_FIELDS(type); Member;
- Member = TREE_CHAIN(Member)) {
- // Should we skip this member?
- if (DECL_P(Member) && DECL_IGNORED_P(Member)) continue;
-
- // Get the location of the member.
- expanded_location MemLoc = GetNodeLocation(Member, false);
-
- if (TREE_CODE(Member) != FIELD_DECL)
- // otherwise it is a static variable, whose debug info is emitted
- // through EmitGlobalVariable().
- continue;
-
- if (!OffsetIsLLVMCompatible(Member))
- // FIXME: field with variable or humongous offset.
- // Skip it for now.
- continue;
-
- /* Ignore nameless fields. */
- if (DECL_NAME (Member) == NULL_TREE
- && !(TREE_CODE (TREE_TYPE (Member)) == UNION_TYPE
- || TREE_CODE (TREE_TYPE (Member)) == RECORD_TYPE))
- continue;
-
- // Field type is the declared type of the field.
- tree FieldNodeType = FieldType(Member);
- DIType MemberType = getOrCreateType(FieldNodeType);
- StringRef MemberName = GetNodeName(Member);
- unsigned MFlags = 0;
- if (TREE_PROTECTED(Member))
- MFlags = llvm::DIType::FlagProtected;
- else if (TREE_PRIVATE(Member))
- MFlags = llvm::DIType::FlagPrivate;
-
- DIType DTy =
- DebugFactory.CreateDerivedType(DW_TAG_member,
- findRegion(DECL_CONTEXT(Member)),
- MemberName,
- getOrCreateFile(MemLoc.file),
- MemLoc.line, NodeSizeInBits(Member),
- NodeAlignInBits(FieldNodeType),
- int_bit_position(Member),
- MFlags, MemberType);
- EltTys.push_back(DTy);
- }
-
- for (tree Member = TYPE_METHODS(type); Member;
- Member = TREE_CHAIN(Member)) {
-
- if (DECL_ABSTRACT_ORIGIN (Member)) continue;
- // Ignore unused aritificial members.
- if (DECL_ARTIFICIAL (Member) && !TREE_USED (Member)) continue;
- // In C++, TEMPLATE_DECLs are marked Ignored, and should be.
- if (DECL_P (Member) && DECL_IGNORED_P (Member)) continue;
-
- std::map<tree_node *, WeakVH >::iterator I = SPCache.find(Member);
- if (I != SPCache.end())
- EltTys.push_back(DISubprogram(cast<MDNode>(I->second)));
- else {
- // Get the location of the member.
- expanded_location MemLoc = GetNodeLocation(Member, false);
- StringRef MemberName = getFunctionName(Member);
- StringRef LinkageName = getLinkageName(Member);
- DIType SPTy = getOrCreateType(TREE_TYPE(Member));
- unsigned Virtuality = 0;
- unsigned VIndex = 0;
- DIType ContainingType;
- if (DECL_VINDEX (Member)) {
- if (host_integerp (DECL_VINDEX (Member), 0))
- VIndex = tree_low_cst (DECL_VINDEX (Member), 0);
- Virtuality = dwarf::DW_VIRTUALITY_virtual;
- ContainingType = getOrCreateType(DECL_CONTEXT(Member));
- }
- DISubprogram SP =
- DebugFactory.CreateSubprogram(findRegion(DECL_CONTEXT(Member)),
- MemberName, MemberName,
- LinkageName,
- getOrCreateFile(MemLoc.file),
- MemLoc.line, SPTy, false, false,
- Virtuality, VIndex, ContainingType,
- DECL_ARTIFICIAL (Member), optimize);
- EltTys.push_back(SP);
- SPCache[Member] = WeakVH(SP);
- }
- }
-
- llvm::DIArray Elements =
- DebugFactory.GetOrCreateArray(EltTys.data(), EltTys.size());
-
- RegionStack.pop_back();
- std::map<tree_node *, WeakVH>::iterator RI = RegionMap.find(type);
- if (RI != RegionMap.end())
- RegionMap.erase(RI);
-
- llvm::DIType ContainingType;
- if (TYPE_VFIELD (type)) {
- tree vtype = DECL_FCONTEXT (TYPE_VFIELD (type));
- ContainingType = getOrCreateType(vtype);
- }
- llvm::DICompositeType RealDecl =
- DebugFactory.CreateCompositeType(Tag, findRegion(TYPE_CONTEXT(type)),
- GetNodeName(type),
- getOrCreateFile(Loc.file),
- Loc.line,
- NodeSizeInBits(type), NodeAlignInBits(type),
- 0, SFlags, llvm::DIType(), Elements,
- RunTimeLang, ContainingType);
- RegionMap[type] = WeakVH(RealDecl);
-
- // Now that we have a real decl for the struct, replace anything using the
- // old decl with the new one. This will recursively update the debug info.
- llvm::DIType(FwdDeclNode).replaceAllUsesWith(RealDecl);
-
- return RealDecl;
-}
-
-/// createVariantType - Create a variant type or return MainTy.
-DIType DebugInfo::createVariantType(tree type, DIType MainTy) {
-
- DIType Ty;
- if (tree TyDef = TYPE_NAME(type)) {
- std::map<tree_node *, WeakVH >::iterator I = TypeCache.find(TyDef);
- if (I != TypeCache.end())
- if (Value *M = I->second)
- return DIType(cast<MDNode>(M));
- if (TREE_CODE(TyDef) == TYPE_DECL && DECL_ORIGINAL_TYPE(TyDef)) {
- expanded_location TypeDefLoc = GetNodeLocation(TyDef);
- Ty = DebugFactory.CreateDerivedType(DW_TAG_typedef,
- findRegion(DECL_CONTEXT(TyDef)),
- GetNodeName(TyDef),
- getOrCreateFile(TypeDefLoc.file),
- TypeDefLoc.line,
- 0 /*size*/,
- 0 /*align*/,
- 0 /*offset */,
- 0 /*flags*/,
- MainTy);
- TypeCache[TyDef] = WeakVH(Ty);
- return Ty;
- }
- }
-
- if (TYPE_VOLATILE(type)) {
- Ty = DebugFactory.CreateDerivedType(DW_TAG_volatile_type,
- findRegion(TYPE_CONTEXT(type)),
- StringRef(),
- getOrCreateFile(main_input_filename),
- 0 /*line no*/,
- NodeSizeInBits(type),
- NodeAlignInBits(type),
- 0 /*offset */,
- 0 /* flags */,
- MainTy);
- MainTy = Ty;
- }
-
- if (TYPE_READONLY(type))
- Ty = DebugFactory.CreateDerivedType(DW_TAG_const_type,
- findRegion(TYPE_CONTEXT(type)),
- StringRef(),
- getOrCreateFile(main_input_filename),
- 0 /*line no*/,
- NodeSizeInBits(type),
- NodeAlignInBits(type),
- 0 /*offset */,
- 0 /* flags */,
- MainTy);
-
- if (TYPE_VOLATILE(type) || TYPE_READONLY(type)) {
- TypeCache[type] = WeakVH(Ty);
- return Ty;
- }
-
- // If, for some reason, the main variant type is seen then use it.
- return MainTy;
-}
-
-/// getOrCreateType - Get the type from the cache or create a new type if
-/// necessary.
-DIType DebugInfo::getOrCreateType(tree type) {
- DEBUGASSERT(type != NULL_TREE && type != error_mark_node &&
- "Not a type.");
- if (type == NULL_TREE || type == error_mark_node) return DIType();
-
- // Should only be void if a pointer/reference/return type. Returning NULL
- // allows the caller to produce a non-derived type.
- if (TREE_CODE(type) == VOID_TYPE) return DIType();
-
- // Check to see if the compile unit already has created this type.
- std::map<tree_node *, WeakVH >::iterator I = TypeCache.find(type);
- if (I != TypeCache.end())
- if (Value *M = I->second)
- return DIType(cast<MDNode>(M));
-
- DIType MainTy;
- if (type != TYPE_MAIN_VARIANT(type) && TYPE_MAIN_VARIANT(type))
- MainTy = getOrCreateType(TYPE_MAIN_VARIANT(type));
-
- DIType Ty = createVariantType(type, MainTy);
- if (Ty.isValid())
- return Ty;
-
- // Work out details of type.
- switch (TREE_CODE(type)) {
- case ERROR_MARK:
- case LANG_TYPE:
- case TRANSLATION_UNIT_DECL:
- default: {
- DEBUGASSERT(0 && "Unsupported type");
- return DIType();
- }
-
- case POINTER_TYPE:
- case REFERENCE_TYPE:
- // Do not cache the pointer type. The pointer may point to a forward
- // declared struct.
- return createPointerType(type);
- break;
-
- case OFFSET_TYPE: {
- // gen_type_die(TYPE_OFFSET_BASETYPE(type), context_die);
- // gen_type_die(TREE_TYPE(type), context_die);
- // gen_ptr_to_mbr_type_die(type, context_die);
- // PR 7104
- break;
- }
-
- case FUNCTION_TYPE:
- case METHOD_TYPE:
- Ty = createMethodType(type);
- break;
-
- case VECTOR_TYPE:
- case ARRAY_TYPE:
- Ty = createArrayType(type);
- break;
-
- case ENUMERAL_TYPE:
- Ty = createEnumType(type);
- break;
-
- case RECORD_TYPE:
- case QUAL_UNION_TYPE:
- case UNION_TYPE:
- return createStructType(type);
- break;
-
- case INTEGER_TYPE:
- case REAL_TYPE:
- case COMPLEX_TYPE:
- case BOOLEAN_TYPE:
- Ty = createBasicType(type);
- break;
- }
- TypeCache[type] = WeakVH(Ty);
- return Ty;
-}
-
-/// Initialize - Initialize debug info by creating compile unit for
-/// main_input_filename. This must be invoked after language dependent
-/// initialization is done.
-void DebugInfo::Initialize() {
-
- // Each input file is encoded as a separate compile unit in LLVM
- // debugging information output. However, many target specific tool chains
- // prefer to encode only one compile unit in an object file. In this
- // situation, the LLVM code generator will include debugging information
- entities in the compile unit that is marked as the main compile unit. The
- code generator accepts at most one main compile unit per module. If a
- // module does not contain any main compile unit then the code generator
- // will emit multiple compile units in the output object file.
- if (!strcmp (main_input_filename, ""))
- TheCU = getOrCreateCompileUnit("<stdin>", true);
- else
- TheCU = getOrCreateCompileUnit(main_input_filename, true);
-}
-
-/// getOrCreateCompileUnit - Get the compile unit from the cache or
-/// create a new one if necessary.
-DICompileUnit DebugInfo::getOrCreateCompileUnit(const char *FullPath,
- bool isMain) {
- if (!FullPath) {
- if (!strcmp (main_input_filename, ""))
- FullPath = "<stdin>";
- else
- FullPath = main_input_filename;
- }
-
- // Get source file information.
- std::string Directory;
- std::string FileName;
- DirectoryAndFile(FullPath, Directory, FileName);
-
- // Set up Language number.
- unsigned LangTag;
- const std::string LanguageName(lang_hooks.name);
- if (LanguageName == "GNU C")
- LangTag = DW_LANG_C89;
- else if (LanguageName == "GNU C++")
- LangTag = DW_LANG_C_plus_plus;
- else if (LanguageName == "GNU Ada")
- LangTag = DW_LANG_Ada95;
- else if (LanguageName == "GNU F77")
- LangTag = DW_LANG_Fortran77;
- else if (LanguageName == "GNU Pascal")
- LangTag = DW_LANG_Pascal83;
- else if (LanguageName == "GNU Java")
- LangTag = DW_LANG_Java;
- else if (LanguageName == "GNU Objective-C")
- LangTag = DW_LANG_ObjC;
- else if (LanguageName == "GNU Objective-C++")
- LangTag = DW_LANG_ObjC_plus_plus;
- else
- LangTag = DW_LANG_C89;
-
- StringRef Flags;
-
- // flag_objc_abi represents the Objective-C runtime version number. It is
- // zero for all other languages.
- unsigned ObjcRunTimeVer = 0;
-// if (flag_objc_abi != 0 && flag_objc_abi != -1)
-// ObjcRunTimeVer = flag_objc_abi;
- return DebugFactory.CreateCompileUnit(LangTag, FileName.c_str(),
- Directory.c_str(),
- version_string, isMain,
- optimize, Flags,
- ObjcRunTimeVer);
-}
-
-/// getOrCreateFile - Get DIFile descriptor.
-DIFile DebugInfo::getOrCreateFile(const char *FullPath) {
- if (!FullPath) {
- if (!strcmp (main_input_filename, ""))
- FullPath = "<stdin>";
- else
- FullPath = main_input_filename;
- }
-
- // Get source file information.
- std::string Directory;
- std::string FileName;
- DirectoryAndFile(FullPath, Directory, FileName);
- return DebugFactory.CreateFile(FileName, Directory, TheCU);
-}
-
-//===----------------------------------------------------------------------===//
-// DIFactory: Basic Helpers
-//===----------------------------------------------------------------------===//
-
-DIFactory::DIFactory(Module &m)
- : M(m), VMContext(M.getContext()), DeclareFn(0), ValueFn(0) {}
-
-Constant *DIFactory::GetTagConstant(unsigned TAG) {
- assert((TAG & LLVMDebugVersionMask) == 0 &&
- "Tag too large for debug encoding!");
- return ConstantInt::get(Type::getInt32Ty(VMContext), TAG | LLVMDebugVersion);
-}
-
-//===----------------------------------------------------------------------===//
-// DIFactory: Primary Constructors
-//===----------------------------------------------------------------------===//
-
- /// GetOrCreateArray - Create a descriptor for an array of descriptors.
-/// This implicitly uniques the arrays created.
-DIArray DIFactory::GetOrCreateArray(DIDescriptor *Tys, unsigned NumTys) {
- if (NumTys == 0) {
- Value *Null = llvm::Constant::getNullValue(Type::getInt32Ty(VMContext));
- return DIArray(MDNode::get(VMContext, &Null, 1));
- }
-
- SmallVector<Value *, 16> Elts(Tys, Tys+NumTys);
- return DIArray(MDNode::get(VMContext, Elts.data(), Elts.size()));
-}
-
-/// GetOrCreateSubrange - Create a descriptor for a value range. This
-/// implicitly uniques the values returned.
-DISubrange DIFactory::GetOrCreateSubrange(int64_t Lo, int64_t Hi) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_subrange_type),
- ConstantInt::get(Type::getInt64Ty(VMContext), Lo),
- ConstantInt::get(Type::getInt64Ty(VMContext), Hi)
- };
-
- return DISubrange(MDNode::get(VMContext, &Elts[0], 3));
-}
-
- /// CreateUnspecifiedParameter - Create an unspecified type descriptor
-/// for the subroutine type.
-DIDescriptor DIFactory::CreateUnspecifiedParameter() {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_unspecified_parameters)
- };
- return DIDescriptor(MDNode::get(VMContext, &Elts[0], 1));
-}
-
-/// CreateCompileUnit - Create a new descriptor for the specified compile
-/// unit. Note that this does not unique compile units within the module.
-DICompileUnit DIFactory::CreateCompileUnit(unsigned LangID,
- StringRef Filename,
- StringRef Directory,
- StringRef Producer,
- bool isMain,
- bool isOptimized,
- StringRef Flags,
- unsigned RunTimeVer) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_compile_unit),
- llvm::Constant::getNullValue(Type::getInt32Ty(VMContext)),
- ConstantInt::get(Type::getInt32Ty(VMContext), LangID),
- MDString::get(VMContext, Filename),
- MDString::get(VMContext, Directory),
- MDString::get(VMContext, Producer),
- ConstantInt::get(Type::getInt1Ty(VMContext), isMain),
- ConstantInt::get(Type::getInt1Ty(VMContext), isOptimized),
- MDString::get(VMContext, Flags),
- ConstantInt::get(Type::getInt32Ty(VMContext), RunTimeVer)
- };
-
- return DICompileUnit(MDNode::get(VMContext, &Elts[0], 10));
-}
-
-/// CreateFile - Create a new descriptor for the specified file.
-DIFile DIFactory::CreateFile(StringRef Filename,
- StringRef Directory,
- DICompileUnit CU) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_file_type),
- MDString::get(VMContext, Filename),
- MDString::get(VMContext, Directory),
- CU
- };
-
- return DIFile(MDNode::get(VMContext, &Elts[0], 4));
-}
-
-/// CreateEnumerator - Create a single enumerator value.
-DIEnumerator DIFactory::CreateEnumerator(StringRef Name, uint64_t Val){
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_enumerator),
- MDString::get(VMContext, Name),
- ConstantInt::get(Type::getInt64Ty(VMContext), Val)
- };
- return DIEnumerator(MDNode::get(VMContext, &Elts[0], 3));
-}
-
-
-/// CreateBasicType - Create a basic type like int, float, etc.
-DIBasicType DIFactory::CreateBasicType(DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- uint64_t SizeInBits,
- uint64_t AlignInBits,
- uint64_t OffsetInBits, unsigned Flags,
- unsigned Encoding) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_base_type),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
- ConstantInt::get(Type::getInt64Ty(VMContext), SizeInBits),
- ConstantInt::get(Type::getInt64Ty(VMContext), AlignInBits),
- ConstantInt::get(Type::getInt64Ty(VMContext), OffsetInBits),
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- ConstantInt::get(Type::getInt32Ty(VMContext), Encoding)
- };
- return DIBasicType(MDNode::get(VMContext, &Elts[0], 10));
-}
-
-
- /// CreateBasicTypeEx - Create a basic type like int, float, etc.
-DIBasicType DIFactory::CreateBasicTypeEx(DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- Constant *SizeInBits,
- Constant *AlignInBits,
- Constant *OffsetInBits, unsigned Flags,
- unsigned Encoding) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_base_type),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
- SizeInBits,
- AlignInBits,
- OffsetInBits,
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- ConstantInt::get(Type::getInt32Ty(VMContext), Encoding)
- };
- return DIBasicType(MDNode::get(VMContext, &Elts[0], 10));
-}
-
-/// CreateArtificialType - Create a new DIType with "artificial" flag set.
-DIType DIFactory::CreateArtificialType(DIType Ty) {
- if (Ty.isArtificial())
- return Ty;
-
- SmallVector<Value *, 9> Elts;
- MDNode *N = Ty;
- assert (N && "Unexpected input DIType!");
- for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
- if (Value *V = N->getOperand(i))
- Elts.push_back(V);
- else
- Elts.push_back(Constant::getNullValue(Type::getInt32Ty(VMContext)));
- }
-
- unsigned CurFlags = Ty.getFlags();
- CurFlags = CurFlags | DIType::FlagArtificial;
-
- // Flags are stored at this slot.
- Elts[8] = ConstantInt::get(Type::getInt32Ty(VMContext), CurFlags);
-
- return DIType(MDNode::get(VMContext, Elts.data(), Elts.size()));
-}
-
-/// CreateDerivedType - Create a derived type like const qualified type,
-/// pointer, typedef, etc.
-DIDerivedType DIFactory::CreateDerivedType(unsigned Tag,
- DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- uint64_t SizeInBits,
- uint64_t AlignInBits,
- uint64_t OffsetInBits,
- unsigned Flags,
- DIType DerivedFrom) {
- Value *Elts[] = {
- GetTagConstant(Tag),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
- ConstantInt::get(Type::getInt64Ty(VMContext), SizeInBits),
- ConstantInt::get(Type::getInt64Ty(VMContext), AlignInBits),
- ConstantInt::get(Type::getInt64Ty(VMContext), OffsetInBits),
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- DerivedFrom,
- };
- return DIDerivedType(MDNode::get(VMContext, &Elts[0], 10));
-}
-
-
- /// CreateDerivedTypeEx - Create a derived type like const qualified type,
-/// pointer, typedef, etc.
-DIDerivedType DIFactory::CreateDerivedTypeEx(unsigned Tag,
- DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- Constant *SizeInBits,
- Constant *AlignInBits,
- Constant *OffsetInBits,
- unsigned Flags,
- DIType DerivedFrom) {
- Value *Elts[] = {
- GetTagConstant(Tag),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
- SizeInBits,
- AlignInBits,
- OffsetInBits,
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- DerivedFrom,
- };
- return DIDerivedType(MDNode::get(VMContext, &Elts[0], 10));
-}
-
-
-/// CreateCompositeType - Create a composite type like array, struct, etc.
-DICompositeType DIFactory::CreateCompositeType(unsigned Tag,
- DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- uint64_t SizeInBits,
- uint64_t AlignInBits,
- uint64_t OffsetInBits,
- unsigned Flags,
- DIType DerivedFrom,
- DIArray Elements,
- unsigned RuntimeLang,
- MDNode *ContainingType) {
-
- Value *Elts[] = {
- GetTagConstant(Tag),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
- ConstantInt::get(Type::getInt64Ty(VMContext), SizeInBits),
- ConstantInt::get(Type::getInt64Ty(VMContext), AlignInBits),
- ConstantInt::get(Type::getInt64Ty(VMContext), OffsetInBits),
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- DerivedFrom,
- Elements,
- ConstantInt::get(Type::getInt32Ty(VMContext), RuntimeLang),
- ContainingType
- };
-
- MDNode *Node = MDNode::get(VMContext, &Elts[0], 13);
- // Create a named metadata so that we do not lose this enum info.
- if (Tag == dwarf::DW_TAG_enumeration_type) {
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.enum");
- NMD->addOperand(Node);
- }
- return DICompositeType(Node);
-}
-
-/// CreateTemporaryType - Create a temporary forward-declared type.
-DIType DIFactory::CreateTemporaryType() {
- // Give the temporary MDNode a tag. It doesn't matter what tag we
- // use here as long as DIType accepts it.
- Value *Elts[] = {
- GetTagConstant(DW_TAG_base_type)
- };
- MDNode *Node = MDNode::getTemporary(VMContext, Elts, array_lengthof(Elts));
- return DIType(Node);
-}
-
-/// CreateTemporaryType - Create a temporary forward-declared type.
-DIType DIFactory::CreateTemporaryType(DIFile F) {
- // Give the temporary MDNode a tag. It doesn't matter what tag we
- // use here as long as DIType accepts it.
- Value *Elts[] = {
- GetTagConstant(DW_TAG_base_type),
- F.getCompileUnit(),
- NULL,
- F
- };
- MDNode *Node = MDNode::getTemporary(VMContext, Elts, array_lengthof(Elts));
- return DIType(Node);
-}
-
- /// CreateCompositeTypeEx - Create a composite type like array, struct, etc.
-DICompositeType DIFactory::CreateCompositeTypeEx(unsigned Tag,
- DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- Constant *SizeInBits,
- Constant *AlignInBits,
- Constant *OffsetInBits,
- unsigned Flags,
- DIType DerivedFrom,
- DIArray Elements,
- unsigned RuntimeLang,
- MDNode *ContainingType) {
- Value *Elts[] = {
- GetTagConstant(Tag),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNumber),
- SizeInBits,
- AlignInBits,
- OffsetInBits,
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- DerivedFrom,
- Elements,
- ConstantInt::get(Type::getInt32Ty(VMContext), RuntimeLang),
- ContainingType
- };
- MDNode *Node = MDNode::get(VMContext, &Elts[0], 13);
- // Create a named metadata so that we do not lose this enum info.
- if (Tag == dwarf::DW_TAG_enumeration_type) {
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.enum");
- NMD->addOperand(Node);
- }
- return DICompositeType(Node);
-}
-
-
-/// CreateSubprogram - Create a new descriptor for the specified subprogram.
-/// See comments in DISubprogram for descriptions of these fields. This
-/// method does not unique the generated descriptors.
-DISubprogram DIFactory::CreateSubprogram(DIDescriptor Context,
- StringRef Name,
- StringRef DisplayName,
- StringRef LinkageName,
- DIFile F,
- unsigned LineNo, DIType Ty,
- bool isLocalToUnit,
- bool isDefinition,
- unsigned VK, unsigned VIndex,
- DIType ContainingType,
- unsigned Flags,
- bool isOptimized,
- Function *Fn) {
-
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_subprogram),
- llvm::Constant::getNullValue(Type::getInt32Ty(VMContext)),
- Context,
- MDString::get(VMContext, Name),
- MDString::get(VMContext, DisplayName),
- MDString::get(VMContext, LinkageName),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
- Ty,
- ConstantInt::get(Type::getInt1Ty(VMContext), isLocalToUnit),
- ConstantInt::get(Type::getInt1Ty(VMContext), isDefinition),
- ConstantInt::get(Type::getInt32Ty(VMContext), (unsigned)VK),
- ConstantInt::get(Type::getInt32Ty(VMContext), VIndex),
- ContainingType,
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags),
- ConstantInt::get(Type::getInt1Ty(VMContext), isOptimized),
- Fn
- };
- MDNode *Node = MDNode::get(VMContext, &Elts[0], 17);
-
- // Create a named metadata so that we do not lose this mdnode.
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.sp");
- NMD->addOperand(Node);
- return DISubprogram(Node);
-}
-
-/// CreateSubprogramDefinition - Create new subprogram descriptor for the
-/// given declaration.
-DISubprogram DIFactory::CreateSubprogramDefinition(DISubprogram &SPDeclaration){
- if (SPDeclaration.isDefinition())
- return DISubprogram(SPDeclaration);
-
- MDNode *DeclNode = SPDeclaration;
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_subprogram),
- llvm::Constant::getNullValue(Type::getInt32Ty(VMContext)),
- DeclNode->getOperand(2), // Context
- DeclNode->getOperand(3), // Name
- DeclNode->getOperand(4), // DisplayName
- DeclNode->getOperand(5), // LinkageName
- DeclNode->getOperand(6), // CompileUnit
- DeclNode->getOperand(7), // LineNo
- DeclNode->getOperand(8), // Type
- DeclNode->getOperand(9), // isLocalToUnit
- ConstantInt::get(Type::getInt1Ty(VMContext), true),
- DeclNode->getOperand(11), // Virtuality
- DeclNode->getOperand(12), // VIndex
- DeclNode->getOperand(13), // Containing Type
- DeclNode->getOperand(14), // Flags
- DeclNode->getOperand(15), // isOptimized
- SPDeclaration.getFunction()
- };
- MDNode *Node = MDNode::get(VMContext, &Elts[0], 16);
-
- // Create a named metadata so that we do not lose this mdnode.
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.sp");
- NMD->addOperand(Node);
- return DISubprogram(Node);
-}
-
-/// CreateGlobalVariable - Create a new descriptor for the specified global.
-DIGlobalVariable
-DIFactory::CreateGlobalVariable(DIDescriptor Context, StringRef Name,
- StringRef DisplayName,
- StringRef LinkageName,
- DIFile F,
- unsigned LineNo, DIType Ty,bool isLocalToUnit,
- bool isDefinition, llvm::GlobalVariable *Val) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_variable),
- llvm::Constant::getNullValue(Type::getInt32Ty(VMContext)),
- Context,
- MDString::get(VMContext, Name),
- MDString::get(VMContext, DisplayName),
- MDString::get(VMContext, LinkageName),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
- Ty,
- ConstantInt::get(Type::getInt1Ty(VMContext), isLocalToUnit),
- ConstantInt::get(Type::getInt1Ty(VMContext), isDefinition),
- Val
- };
-
- Value *const *Vs = &Elts[0];
- MDNode *Node = MDNode::get(VMContext, Vs, 12);
-
- // Create a named metadata so that we do not lose this mdnode.
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.gv");
- NMD->addOperand(Node);
-
- return DIGlobalVariable(Node);
-}
-
-/// CreateGlobalVariable - Create a new descriptor for the specified constant.
-DIGlobalVariable
-DIFactory::CreateGlobalVariable(DIDescriptor Context, StringRef Name,
- StringRef DisplayName,
- StringRef LinkageName,
- DIFile F,
- unsigned LineNo, DIType Ty,bool isLocalToUnit,
- bool isDefinition, llvm::Constant *Val) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_variable),
- llvm::Constant::getNullValue(Type::getInt32Ty(VMContext)),
- Context,
- MDString::get(VMContext, Name),
- MDString::get(VMContext, DisplayName),
- MDString::get(VMContext, LinkageName),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
- Ty,
- ConstantInt::get(Type::getInt1Ty(VMContext), isLocalToUnit),
- ConstantInt::get(Type::getInt1Ty(VMContext), isDefinition),
- Val
- };
-
- Value *const *Vs = &Elts[0];
- MDNode *Node = MDNode::get(VMContext, Vs, 12);
-
- // Create a named metadata so that we do not lose this mdnode.
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.gv");
- NMD->addOperand(Node);
-
- return DIGlobalVariable(Node);
-}
-
-/// CreateVariable - Create a new descriptor for the specified variable.
-DIVariable DIFactory::CreateVariable(unsigned Tag, DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNo,
- DIType Ty, bool AlwaysPreserve,
- unsigned Flags) {
- Value *Elts[] = {
- GetTagConstant(Tag),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
- Ty,
- ConstantInt::get(Type::getInt32Ty(VMContext), Flags)
- };
- MDNode *Node = MDNode::get(VMContext, &Elts[0], 7);
- if (AlwaysPreserve) {
- // The optimizer may remove a local variable. If there is an interest
- // in preserving variable info in such a situation then stash it in a
- // named mdnode.
- DISubprogram Fn(getDISubprogram(Context));
- StringRef FName = "fn";
- if (Fn.getFunction())
- FName = Fn.getFunction()->getName();
- char One = '\1';
- if (FName.startswith(StringRef(&One, 1)))
- FName = FName.substr(1);
-
-
- NamedMDNode *FnLocals = getOrInsertFnSpecificMDNode(M, FName);
- FnLocals->addOperand(Node);
- }
- return DIVariable(Node);
-}
-
-
-/// CreateComplexVariable - Create a new descriptor for the specified variable
-/// which has a complex address expression for its address.
-DIVariable DIFactory::CreateComplexVariable(unsigned Tag, DIDescriptor Context,
- StringRef Name, DIFile F,
- unsigned LineNo,
- DIType Ty, Value *const *Addr,
- unsigned NumAddr) {
- SmallVector<Value *, 15> Elts;
- Elts.push_back(GetTagConstant(Tag));
- Elts.push_back(Context);
- Elts.push_back(MDString::get(VMContext, Name));
- Elts.push_back(F);
- Elts.push_back(ConstantInt::get(Type::getInt32Ty(VMContext), LineNo));
- Elts.push_back(Ty);
- Elts.append(Addr, Addr+NumAddr);
-
- return DIVariable(MDNode::get(VMContext, Elts.data(), Elts.size()));
-}
-
-
- /// CreateLexicalBlock - This creates a descriptor for a lexical block with
- /// the specified parent context.
-DILexicalBlock DIFactory::CreateLexicalBlock(DIDescriptor Context,
- DIFile F, unsigned LineNo,
- unsigned Col) {
- // Defeat MDNode uniquing for lexical blocks.
- static unsigned int unique_id = 0;
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_lexical_block),
- Context,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
- ConstantInt::get(Type::getInt32Ty(VMContext), Col),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), unique_id++)
- };
- return DILexicalBlock(MDNode::get(VMContext, &Elts[0], 6));
-}
-
- /// CreateNameSpace - This creates a new descriptor for a namespace
-/// with the specified parent context.
-DINameSpace DIFactory::CreateNameSpace(DIDescriptor Context, StringRef Name,
- DIFile F,
- unsigned LineNo) {
- Value *Elts[] = {
- GetTagConstant(dwarf::DW_TAG_namespace),
- Context,
- MDString::get(VMContext, Name),
- F,
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo)
- };
- return DINameSpace(MDNode::get(VMContext, &Elts[0], 5));
-}
-
-/// CreateLocation - Creates a debug info location.
-DILocation DIFactory::CreateLocation(unsigned LineNo, unsigned ColumnNo,
- DIScope S, DILocation OrigLoc) {
- Value *Elts[] = {
- ConstantInt::get(Type::getInt32Ty(VMContext), LineNo),
- ConstantInt::get(Type::getInt32Ty(VMContext), ColumnNo),
- S,
- OrigLoc,
- };
- return DILocation(MDNode::get(VMContext, &Elts[0], 4));
-}
-
-//===----------------------------------------------------------------------===//
-// DIFactory: Routines for inserting code into a function
-//===----------------------------------------------------------------------===//
-
-/// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
-Instruction *DIFactory::InsertDeclare(Value *Storage, DIVariable D,
- Instruction *InsertBefore) {
- assert(Storage && "no storage passed to dbg.declare");
- assert(D.Verify() && "empty DIVariable passed to dbg.declare");
- if (!DeclareFn)
- DeclareFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_declare);
-
- Value *Args[] = { MDNode::get(Storage->getContext(), &Storage, 1),
- D };
- return CallInst::Create(DeclareFn, Args, Args+2, "", InsertBefore);
-}
-
-/// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
-Instruction *DIFactory::InsertDeclare(Value *Storage, DIVariable D,
- BasicBlock *InsertAtEnd) {
- assert(Storage && "no storage passed to dbg.declare");
- assert(D.Verify() && "invalid DIVariable passed to dbg.declare");
- if (!DeclareFn)
- DeclareFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_declare);
-
- Value *Args[] = { MDNode::get(Storage->getContext(), &Storage, 1),
- D };
-
- // If this block already has a terminator then insert this intrinsic
- // before the terminator.
- if (TerminatorInst *T = InsertAtEnd->getTerminator())
- return CallInst::Create(DeclareFn, Args, Args+2, "", T);
- else
- return CallInst::Create(DeclareFn, Args, Args+2, "", InsertAtEnd);
-}
-
-/// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
-Instruction *DIFactory::InsertDbgValueIntrinsic(Value *V, uint64_t Offset,
- DIVariable D,
- Instruction *InsertBefore) {
- assert(V && "no value passed to dbg.value");
- assert(D.Verify() && "invalid DIVariable passed to dbg.value");
- if (!ValueFn)
- ValueFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_value);
-
- Value *Args[] = { MDNode::get(V->getContext(), &V, 1),
- ConstantInt::get(Type::getInt64Ty(V->getContext()), Offset),
- D };
- return CallInst::Create(ValueFn, Args, Args+3, "", InsertBefore);
-}
-
-/// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
-Instruction *DIFactory::InsertDbgValueIntrinsic(Value *V, uint64_t Offset,
- DIVariable D,
- BasicBlock *InsertAtEnd) {
- assert(V && "no value passed to dbg.value");
- assert(D.Verify() && "invalid DIVariable passed to dbg.value");
- if (!ValueFn)
- ValueFn = Intrinsic::getDeclaration(&M, Intrinsic::dbg_value);
-
- Value *Args[] = { MDNode::get(V->getContext(), &V, 1),
- ConstantInt::get(Type::getInt64Ty(V->getContext()), Offset),
- D };
- return CallInst::Create(ValueFn, Args, Args+3, "", InsertAtEnd);
-}
-
-// RecordType - Record DIType in a module such that it is not lost even if
-// it is not referenced through debug info anchors.
-void DIFactory::RecordType(DIType T) {
- NamedMDNode *NMD = M.getOrInsertNamedMetadata("llvm.dbg.ty");
- NMD->addOperand(T);
-}
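
For reference, here is a minimal sketch of how the DIFactory routines above are
typically driven to describe a single function. It is illustrative only, not
taken from the dragonegg sources: the file name "test.c", the directory, the
line numbers and the helper name describeFunction are all assumptions, and the
plugin's Debug.h (formerly llvm-debug.h) is assumed to be included so that
DIFactory and the DI* descriptor classes are visible.

#include "llvm/Module.h"
#include "llvm/Support/Dwarf.h"

using namespace llvm;

// Hypothetical helper: build debug descriptors for Fn with DIFactory.
static DISubprogram describeFunction(Module &M, Function *Fn) {
  DIFactory Factory(M);

  // One compile unit per input file, marked as the main unit.
  DICompileUnit CU = Factory.CreateCompileUnit(dwarf::DW_LANG_C89,
                                               "test.c", "/tmp",
                                               "dragonegg", /*isMain=*/true);
  DIFile File = Factory.CreateFile("test.c", "/tmp", CU);

  // A 32-bit signed "int" basic type.
  DIBasicType IntTy = Factory.CreateBasicType(CU, "int", File, /*Line=*/0,
                                              32, 32, 0, /*Flags=*/0,
                                              dwarf::DW_ATE_signed);

  // A subroutine type whose only element is the return type.
  DIDescriptor Sig[] = { IntTy };
  DIArray SigArray = Factory.GetOrCreateArray(Sig, 1);
  DICompositeType FnTy =
      Factory.CreateCompositeType(dwarf::DW_TAG_subroutine_type, File, "",
                                  File, 0, 0, 0, 0, 0, DIType(), SigArray);

  // The subprogram descriptor itself, assumed to be defined at line 1.
  return Factory.CreateSubprogram(CU, Fn->getName(), Fn->getName(), "",
                                  File, /*LineNo=*/1, FnTy,
                                  /*isLocalToUnit=*/false,
                                  /*isDefinition=*/true,
                                  0, 0, DIType(), 0, /*isOptimized=*/false,
                                  Fn);
}
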
Removed: dragonegg/trunk/llvm-debug.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-debug.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-debug.h (original)
+++ dragonegg/trunk/llvm-debug.h (removed)
@@ -1,367 +0,0 @@
-//===---- llvm-debug.h - Interface for generating debug info ----*- C++ -*-===//
-//
-// Copyright (C) 2006, 2007, 2008, 2009, 2010, 2011 Jim Laskey, Duncan Sands
-// et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file declares the debug interfaces shared among the dragonegg files.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_DEBUG_H
-#define LLVM_DEBUG_H
-
-// Plugin headers
-#include "llvm-internal.h"
-
-// LLVM headers
-#include "llvm/Analysis/DebugInfo.h"
-#include "llvm/Support/Allocator.h"
-#include "llvm/Support/ValueHandle.h"
-
-// System headers
-#include <map>
-
-namespace llvm {
-
-// Forward declarations
-class AllocaInst;
-class BasicBlock;
-class CallInst;
-class Function;
-class Module;
-
-/// DIFactory - This object assists with the construction of the various
-/// descriptors.
-class DIFactory {
- Module &M;
- LLVMContext& VMContext;
-
- Function *DeclareFn; // llvm.dbg.declare
- Function *ValueFn; // llvm.dbg.value
-
- DIFactory(const DIFactory &); // DO NOT IMPLEMENT
- void operator=(const DIFactory&); // DO NOT IMPLEMENT
- public:
- enum ComplexAddrKind { OpPlus=1, OpDeref };
-
- explicit DIFactory(Module &m);
-
- /// GetOrCreateArray - Create a descriptor for an array of descriptors.
- /// This implicitly uniques the arrays created.
- DIArray GetOrCreateArray(DIDescriptor *Tys, unsigned NumTys);
-
- /// GetOrCreateSubrange - Create a descriptor for a value range. This
- /// implicitly uniques the values returned.
- DISubrange GetOrCreateSubrange(int64_t Lo, int64_t Hi);
-
- /// CreateUnspecifiedParameter - Create an unspecified type descriptor
- /// for a subroutine type.
- DIDescriptor CreateUnspecifiedParameter();
-
- /// CreateCompileUnit - Create a new descriptor for the specified compile
- /// unit.
- DICompileUnit CreateCompileUnit(unsigned LangID,
- StringRef Filename,
- StringRef Directory,
- StringRef Producer,
- bool isMain = false,
- bool isOptimized = false,
- StringRef Flags = "",
- unsigned RunTimeVer = 0);
-
- /// CreateFile - Create a new descriptor for the specified file.
- DIFile CreateFile(StringRef Filename, StringRef Directory,
- DICompileUnit CU);
-
- /// CreateEnumerator - Create a single enumerator value.
- DIEnumerator CreateEnumerator(StringRef Name, uint64_t Val);
-
- /// CreateBasicType - Create a basic type like int, float, etc.
- DIBasicType CreateBasicType(DIDescriptor Context, StringRef Name,
- DIFile F, unsigned LineNumber,
- uint64_t SizeInBits, uint64_t AlignInBits,
- uint64_t OffsetInBits, unsigned Flags,
- unsigned Encoding);
-
- /// CreateBasicTypeEx - Create a basic type like int, float, etc.
- DIBasicType CreateBasicTypeEx(DIDescriptor Context, StringRef Name,
- DIFile F, unsigned LineNumber,
- Constant *SizeInBits, Constant *AlignInBits,
- Constant *OffsetInBits, unsigned Flags,
- unsigned Encoding);
-
- /// CreateDerivedType - Create a derived type like const qualified type,
- /// pointer, typedef, etc.
- DIDerivedType CreateDerivedType(unsigned Tag, DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- uint64_t SizeInBits, uint64_t AlignInBits,
- uint64_t OffsetInBits, unsigned Flags,
- DIType DerivedFrom);
-
- /// CreateDerivedTypeEx - Create a derived type like const qualified type,
- /// pointer, typedef, etc.
- DIDerivedType CreateDerivedTypeEx(unsigned Tag, DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- Constant *SizeInBits,
- Constant *AlignInBits,
- Constant *OffsetInBits, unsigned Flags,
- DIType DerivedFrom);
-
- /// CreateCompositeType - Create a composite type like array, struct, etc.
- DICompositeType CreateCompositeType(unsigned Tag, DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- uint64_t SizeInBits,
- uint64_t AlignInBits,
- uint64_t OffsetInBits, unsigned Flags,
- DIType DerivedFrom,
- DIArray Elements,
- unsigned RunTimeLang = 0,
- MDNode *ContainingType = 0);
-
- /// CreateTemporaryType - Create a temporary forward-declared type.
- DIType CreateTemporaryType();
- DIType CreateTemporaryType(DIFile F);
-
- /// CreateArtificialType - Create a new DIType with "artificial" flag set.
- DIType CreateArtificialType(DIType Ty);
-
- /// CreateCompositeTypeEx - Create a composite type like array, struct, etc.
- DICompositeType CreateCompositeTypeEx(unsigned Tag, DIDescriptor Context,
- StringRef Name,
- DIFile F,
- unsigned LineNumber,
- Constant *SizeInBits,
- Constant *AlignInBits,
- Constant *OffsetInBits,
- unsigned Flags,
- DIType DerivedFrom,
- DIArray Elements,
- unsigned RunTimeLang = 0,
- MDNode *ContainingType = 0);
-
- /// CreateSubprogram - Create a new descriptor for the specified subprogram.
- /// See comments in DISubprogram for descriptions of these fields.
- DISubprogram CreateSubprogram(DIDescriptor Context, StringRef Name,
- StringRef DisplayName,
- StringRef LinkageName,
- DIFile F, unsigned LineNo,
- DIType Ty, bool isLocalToUnit,
- bool isDefinition,
- unsigned VK = 0,
- unsigned VIndex = 0,
- DIType ContainingType = DIType(),
- unsigned Flags = 0,
- bool isOptimized = false,
- Function *Fn = 0);
-
- /// CreateSubprogramDefinition - Create new subprogram descriptor for the
- /// given declaration.
- DISubprogram CreateSubprogramDefinition(DISubprogram &SPDeclaration);
-
- /// CreateGlobalVariable - Create a new descriptor for the specified global.
- DIGlobalVariable
- CreateGlobalVariable(DIDescriptor Context, StringRef Name,
- StringRef DisplayName,
- StringRef LinkageName,
- DIFile F,
- unsigned LineNo, DIType Ty, bool isLocalToUnit,
- bool isDefinition, llvm::GlobalVariable *GV);
-
- /// CreateGlobalVariable - Create a new descriptor for the specified constant.
- DIGlobalVariable
- CreateGlobalVariable(DIDescriptor Context, StringRef Name,
- StringRef DisplayName,
- StringRef LinkageName,
- DIFile F,
- unsigned LineNo, DIType Ty, bool isLocalToUnit,
- bool isDefinition, llvm::Constant *C);
-
- /// CreateVariable - Create a new descriptor for the specified variable.
- DIVariable CreateVariable(unsigned Tag, DIDescriptor Context,
- StringRef Name,
- DIFile F, unsigned LineNo,
- DIType Ty, bool AlwaysPreserve = false,
- unsigned Flags = 0);
-
- /// CreateComplexVariable - Create a new descriptor for the specified
- /// variable which has a complex address expression for its address.
- DIVariable CreateComplexVariable(unsigned Tag, DIDescriptor Context,
- StringRef Name, DIFile F, unsigned LineNo,
- DIType Ty, Value *const *Addr,
- unsigned NumAddr);
-
- /// CreateLexicalBlock - This creates a descriptor for a lexical block
- /// with the specified parent context.
- DILexicalBlock CreateLexicalBlock(DIDescriptor Context, DIFile F,
- unsigned Line = 0, unsigned Col = 0);
-
- /// CreateNameSpace - This creates a new descriptor for a namespace
- /// with the specified parent context.
- DINameSpace CreateNameSpace(DIDescriptor Context, StringRef Name,
- DIFile F, unsigned LineNo);
-
- /// CreateLocation - Creates a debug info location.
- DILocation CreateLocation(unsigned LineNo, unsigned ColumnNo,
- DIScope S, DILocation OrigLoc);
-
- /// CreateLocation - Creates a debug info location.
- DILocation CreateLocation(unsigned LineNo, unsigned ColumnNo,
- DIScope S, MDNode *OrigLoc = 0);
-
- /// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
- Instruction *InsertDeclare(llvm::Value *Storage, DIVariable D,
- BasicBlock *InsertAtEnd);
-
- /// InsertDeclare - Insert a new llvm.dbg.declare intrinsic call.
- Instruction *InsertDeclare(llvm::Value *Storage, DIVariable D,
- Instruction *InsertBefore);
-
- /// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
- Instruction *InsertDbgValueIntrinsic(llvm::Value *V, uint64_t Offset,
- DIVariable D, BasicBlock *InsertAtEnd);
-
- /// InsertDbgValueIntrinsic - Insert a new llvm.dbg.value intrinsic call.
- Instruction *InsertDbgValueIntrinsic(llvm::Value *V, uint64_t Offset,
- DIVariable D, Instruction *InsertBefore);
-
- // RecordType - Record DIType in a module such that it is not lost even if
- // it is not referenced through debug info anchors.
- void RecordType(DIType T);
-
- private:
- Constant *GetTagConstant(unsigned TAG);
-};
-
- /// DebugInfo - This class gathers all debug information during compilation and
- /// is responsible for emitting it to llvm globals or passing it to the backend.
-class DebugInfo {
-private:
- Module *M; // The current module.
- DIFactory DebugFactory;
- const char *CurFullPath; // Current location file encountered.
- int CurLineNo; // Current location line# encountered.
- const char *PrevFullPath; // Previous location file encountered.
- int PrevLineNo; // Previous location line# encountered.
- BasicBlock *PrevBB; // Last basic block encountered.
-
- DICompileUnit TheCU; // The compile unit.
-
- std::map<tree_node *, WeakVH > TypeCache;
- // Cache of previously constructed
- // Types.
- std::map<tree_node *, WeakVH > SPCache;
- // Cache of previously constructed
- // Subprograms.
- std::map<tree_node *, WeakVH> NameSpaceCache;
- // Cache of previously constructed name
- // spaces.
-
- SmallVector<WeakVH, 4> RegionStack;
- // Stack to track declarative scopes.
-
- std::map<tree_node *, WeakVH> RegionMap;
-
- /// FunctionNames - This is storage for function names that are constructed
- /// on demand, for example C++ destructors, C++ operators, etc.
- llvm::BumpPtrAllocator FunctionNames;
-
-public:
- DebugInfo(Module *m);
-
- /// Initialize - Initialize debug info by creating compile unit for
- /// main_input_filename. This must be invoked after language dependent
- /// initialization is done.
- void Initialize();
-
- // Accessors.
- void setLocationFile(const char *FullPath) { CurFullPath = FullPath; }
- void setLocationLine(int LineNo) { CurLineNo = LineNo; }
-
- /// EmitFunctionStart - Constructs the debug code for entering a function -
- /// "llvm.dbg.func.start."
- void EmitFunctionStart(tree_node *FnDecl, Function *Fn);
-
- /// EmitFunctionEnd - Constructs the debug code for exiting a declarative
- /// region - "llvm.dbg.region.end."
- void EmitFunctionEnd(bool EndFunction);
-
- /// EmitDeclare - Constructs the debug code for the allocation of a new
- /// variable - "llvm.dbg.declare."
- void EmitDeclare(tree_node *decl, unsigned Tag, const char *Name,
- tree_node *type, Value *AI, LLVMBuilder &Builder);
-
- /// EmitStopPoint - Emit a call to llvm.dbg.stoppoint to indicate a change of
- /// source line.
- void EmitStopPoint(BasicBlock *CurBB, LLVMBuilder &Builder);
-
- /// EmitGlobalVariable - Emit information about a global variable.
- ///
- void EmitGlobalVariable(GlobalVariable *GV, tree_node *decl);
-
- /// getOrCreateType - Get the type from the cache or create a new type if
- /// necessary.
- DIType getOrCreateType(tree_node *type);
-
- /// createBasicType - Create BasicType.
- DIType createBasicType(tree_node *type);
-
- /// createMethodType - Create MethodType.
- DIType createMethodType(tree_node *type);
-
- /// createPointerType - Create PointerType.
- DIType createPointerType(tree_node *type);
-
- /// createArrayType - Create ArrayType.
- DIType createArrayType(tree_node *type);
-
- /// createEnumType - Create EnumType.
- DIType createEnumType(tree_node *type);
-
- /// createStructType - Create StructType for struct or union or class.
- DIType createStructType(tree_node *type);
-
- /// createVariantType - Create a variant type or return MainTy.
- DIType createVariantType(tree_node *type, DIType MainTy);
-
- /// getOrCreateCompileUnit - Create a new compile unit.
- DICompileUnit getOrCreateCompileUnit(const char *FullPath,
- bool isMain = false);
-
- /// getOrCreateFile - Get DIFile descriptor.
- DIFile getOrCreateFile(const char *FullPath);
-
- /// findRegion - Find tree_node N's region.
- DIDescriptor findRegion(tree_node *n);
-
- /// getOrCreateNameSpace - Get name space descriptor for the tree node.
- DINameSpace getOrCreateNameSpace(tree_node *Node, DIDescriptor Context);
-
- /// getFunctionName - Get function name for the given FnDecl. If the
- /// name is constructed on demand (e.g. C++ destructor) then the name
- /// is stored on the side.
- StringRef getFunctionName(tree_node *FnDecl);
-};
-
-} // end namespace llvm
-
-#endif /* LLVM_DEBUG_H */
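
To make the interplay between the DebugInfo class and the conversion code
concrete, the following sketch shows the call sequence used to drive the
interface declared above for one function. Again this is illustrative only and
not from the patch: ConvertSomeFunction, the file name and the line number are
assumptions, and the plugin headers (Internals.h / Debug.h after the rename)
are assumed to be included so that TheDebugInfo, LLVMBuilder and DebugInfo are
visible.

// Hypothetical driver showing the DebugInfo call sequence for one function.
void ConvertSomeFunction(tree_node *FnDecl, Function *Fn,
                         LLVMBuilder &Builder) {
  if (!TheDebugInfo)
    return;                                   // debug info is optional

  // Open the function's scope and describe its declaration.
  TheDebugInfo->EmitFunctionStart(FnDecl, Fn);

  // Before emitting each statement, record its source location and emit a
  // stop point so the generated IR gets a line number.
  TheDebugInfo->setLocationFile("test.c");    // illustrative file name
  TheDebugInfo->setLocationLine(42);          // illustrative line number
  TheDebugInfo->EmitStopPoint(Builder.GetInsertBlock(), Builder);

  // ... emit the statement's IR here ...

  // Close the function's scope once the body has been emitted.
  TheDebugInfo->EmitFunctionEnd(true);
}
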
Removed: dragonegg/trunk/llvm-internal.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-internal.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-internal.h (original)
+++ dragonegg/trunk/llvm-internal.h (removed)
@@ -1,860 +0,0 @@
-//=-- llvm-internal.h - Interface between the backend components --*- C++ -*-=//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file declares the internal interfaces shared among the dragonegg files.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_INTERNAL_H
-#define LLVM_INTERNAL_H
-
-// LLVM headers
-#include "llvm/Intrinsics.h"
-#include "llvm/ADT/DenseMap.h"
-#include "llvm/ADT/SetVector.h"
-#include "llvm/Support/IRBuilder.h"
-#include "llvm/Support/TargetFolder.h"
-
-struct basic_block_def;
-union gimple_statement_d;
-union tree_node;
-
-namespace llvm {
- class Module;
- class GlobalVariable;
- class Function;
- class GlobalValue;
- class BasicBlock;
- class Instruction;
- class AllocaInst;
- class BranchInst;
- class Value;
- class Constant;
- class ConstantInt;
- class Type;
- class FunctionType;
- class TargetMachine;
- class TargetData;
- class DebugInfo;
- template<typename> class AssertingVH;
- template<typename> class TrackingVH;
-}
-using namespace llvm;
-
-typedef IRBuilder<true, TargetFolder> LLVMBuilder;
-
-// Global state.
-
-/// TheModule - This is the current global module that we are compiling into.
-///
-extern llvm::Module *TheModule;
-
- /// TheDebugInfo - This object is responsible for gathering all debug information.
- /// If its value is NULL then no debug information should be gathered.
-extern llvm::DebugInfo *TheDebugInfo;
-
-/// TheTarget - The current target being compiled for.
-///
-extern llvm::TargetMachine *TheTarget;
-
-/// TheFolder - The constant folder to use.
-extern TargetFolder *TheFolder;
-
-/// getTargetData - Return the current TargetData object from TheTarget.
-const TargetData &getTargetData();
-
-/// flag_default_initialize_globals - Whether global variables with no explicit
-/// initial value should be zero initialized.
-extern bool flag_default_initialize_globals;
-
-/// flag_odr - Whether the language being compiled obeys the One Definition Rule
-/// (i.e. if the same function is defined in multiple compilation units, all the
-/// definitions are equivalent).
-extern bool flag_odr;
-
-/// flag_vararg_requires_arguments - Do not consider functions with no arguments
-/// to take a variable number of arguments (...). If set then a function like
-/// "T foo() {}" will be treated like "T foo(void) {}" and not "T foo(...) {}".
-extern bool flag_vararg_requires_arguments;
-
-/// flag_force_vararg_prototypes - Force prototypes to take a variable number of
-/// arguments (...). This is helpful if the language front-end sometimes emits
-/// calls where the call arguments do not match the callee function declaration.
-extern bool flag_force_vararg_prototypes;
-
-/// AttributeUsedGlobals - The list of globals that are marked attribute(used).
-extern SmallSetVector<Constant *,32> AttributeUsedGlobals;
-
-extern Constant* ConvertMetadataStringToGV(const char *str);
-
-/// AddAnnotateAttrsToGlobal - Adds decls that have a
-/// annotate attribute to a vector to be emitted later.
-extern void AddAnnotateAttrsToGlobal(GlobalValue *GV, tree_node *decl);
-
-// Mapping between GCC declarations and LLVM values. The GCC declaration must
-// satisfy HAS_RTL_P.
-
-/// DECL_LLVM - Returns the LLVM declaration of a global variable or function.
-extern Value *make_decl_llvm(tree_node *);
-#define DECL_LLVM(NODE) make_decl_llvm(NODE)
-
-/// SET_DECL_LLVM - Set the DECL_LLVM for NODE to LLVM.
-extern Value *set_decl_llvm(tree_node *, Value *);
-#define SET_DECL_LLVM(NODE, LLVM) set_decl_llvm(NODE, LLVM)
-
-/// DECL_LLVM_IF_SET - The DECL_LLVM for NODE, if it is set, or NULL, if it is
-/// not set.
-extern Value *get_decl_llvm(tree_node *);
-#define DECL_LLVM_IF_SET(NODE) (HAS_RTL_P(NODE) ? get_decl_llvm(NODE) : NULL)
-
-/// DECL_LLVM_SET_P - Returns nonzero if the DECL_LLVM for NODE has already
-/// been set.
-#define DECL_LLVM_SET_P(NODE) (DECL_LLVM_IF_SET(NODE) != NULL)
-
-/// DEFINITION_LLVM - Ensures that the body or initial value of the given GCC
-/// global will be output, and returns a declaration for it.
-Value *make_definition_llvm(tree_node *decl);
-#define DEFINITION_LLVM(NODE) make_definition_llvm(NODE)
-
-// Mapping between GCC declarations and non-negative integers. The GCC
-// declaration must not satisfy HAS_RTL_P.
-
-/// set_decl_index - Associate a non-negative number with the given GCC
-/// declaration.
-int set_decl_index(tree_node *, int);
-
-/// get_decl_index - Get the non-negative number associated with the given GCC
-/// declaration. Returns a negative value if no such association has been made.
-int get_decl_index(tree_node *);
-
-void changeLLVMConstant(Constant *Old, Constant *New);
-void register_ctor_dtor(Function *, int, bool);
-void readLLVMTypesStringTable();
-void writeLLVMTypesStringTable();
-void readLLVMValues();
-void writeLLVMValues();
-void clearTargetBuiltinCache();
-const char *extractRegisterName(tree_node *);
-void handleVisibility(tree_node *decl, GlobalValue *GV);
-Twine getLLVMAssemblerName(tree_node *);
-
-struct StructTypeConversionInfo;
-
-/// Return true if and only if field no. N from struct type T is a padding
-/// element added to match llvm struct type size and gcc struct type size.
-bool isPaddingElement(tree_node*, unsigned N);
-
-/// TypeConverter - Implement the converter from GCC types to LLVM types.
-///
-class TypeConverter {
- /// ConvertingStruct - If we are converting a RECORD or UNION to an LLVM type
- /// we set this flag to true.
- bool ConvertingStruct;
-
- /// PointersToReresolve - When ConvertingStruct is true, our handling of
- /// POINTER_TYPE and REFERENCE_TYPE is changed to return opaque*'s instead
- /// of recursively calling ConvertType. When this happens,
- /// we add the POINTER_TYPE to this list.
- ///
- std::vector<tree_node*> PointersToReresolve;
-public:
- TypeConverter() : ConvertingStruct(false) {}
-
- /// ConvertType - Returns the LLVM type to use for memory that holds a value
- /// of the given GCC type (GetRegType should be used for values in registers).
- const Type *ConvertType(tree_node *type);
-
- /// GCCTypeOverlapsWithLLVMTypePadding - Return true if the specified GCC type
- /// has any data that overlaps with structure padding in the specified LLVM
- /// type.
- static bool GCCTypeOverlapsWithLLVMTypePadding(tree_node *t, const Type *Ty);
-
-
- /// ConvertFunctionType - Convert the specified FUNCTION_TYPE or METHOD_TYPE
- /// tree to an LLVM type. This does the same thing that ConvertType does, but
- /// it also returns the function's LLVM calling convention and attributes.
- const FunctionType *ConvertFunctionType(tree_node *type,
- tree_node *decl,
- tree_node *static_chain,
- CallingConv::ID &CallingConv,
- AttrListPtr &PAL);
-
- /// ConvertArgListToFnType - Given a DECL_ARGUMENTS list on a GCC tree,
- /// return the LLVM type corresponding to the function. This is useful for
- /// turning "T foo(...)" functions into "T foo(void)" functions.
- const FunctionType *ConvertArgListToFnType(tree_node *type,
- tree_node *arglist,
- tree_node *static_chain,
- CallingConv::ID &CallingConv,
- AttrListPtr &PAL);
-
-private:
- const Type *ConvertRECORD(tree_node *type);
- bool DecodeStructFields(tree_node *Field, StructTypeConversionInfo &Info);
- void DecodeStructBitField(tree_node *Field, StructTypeConversionInfo &Info);
- void SelectUnionMember(tree_node *type, StructTypeConversionInfo &Info);
-};
-
-extern TypeConverter *TheTypeConverter;
-
-/// ConvertType - Returns the LLVM type to use for memory that holds a value
-/// of the given GCC type (GetRegType should be used for values in registers).
-inline const Type *ConvertType(tree_node *type) {
- return TheTypeConverter->ConvertType(type);
-}
-
-/// getDefaultValue - Return the default value to use for a constant or global
-/// that has no value specified. For example in C like languages such variables
-/// are initialized to zero, while in Ada they hold an undefined value.
-inline Constant *getDefaultValue(const Type *Ty) {
- return flag_default_initialize_globals ?
- Constant::getNullValue(Ty) : UndefValue::get(Ty);
-}
-
- /// GetUnitPointerType - Returns an LLVM pointer type which points to memory
- /// one address unit wide. For example, on a machine which has 16 bit bytes
- /// this returns an i16*.
-extern const Type *GetUnitPointerType(LLVMContext &C, unsigned AddrSpace = 0);
-
-/// GetFieldIndex - Return the index of the field in the given LLVM type that
-/// corresponds to the GCC field declaration 'decl'. This means that the LLVM
-/// and GCC fields start in the same byte (if 'decl' is a bitfield, this means
-/// that its first bit is within the byte the LLVM field starts at). Returns
-/// INT_MAX if there is no such LLVM field.
-int GetFieldIndex(tree_node *decl, const Type *Ty);
-
-/// getINTEGER_CSTVal - Return the specified INTEGER_CST value as a uint64_t.
-///
-uint64_t getINTEGER_CSTVal(tree_node *exp);
-
-/// isInt64 - Return true if t is an INTEGER_CST that fits in a 64 bit integer.
-/// If Unsigned is false, returns whether it fits in a int64_t. If Unsigned is
-/// true, returns whether the value is non-negative and fits in a uint64_t.
-/// Always returns false for overflowed constants or if t is NULL.
-bool isInt64(tree_node *t, bool Unsigned);
-
-/// getInt64 - Extract the value of an INTEGER_CST as a 64 bit integer. If
-/// Unsigned is false, the value must fit in a int64_t. If Unsigned is true,
-/// the value must be non-negative and fit in a uint64_t. Must not be used on
-/// overflowed constants. These conditions can be checked by calling isInt64.
-uint64_t getInt64(tree_node *t, bool Unsigned);
-
-/// isPassedByInvisibleReference - Return true if the specified type should be
-/// passed by 'invisible reference'. In other words, instead of passing the
-/// thing by value, pass the address of a temporary.
-bool isPassedByInvisibleReference(tree_node *type);
-
-/// isSequentialCompatible - Return true if the specified gcc array or pointer
-/// type and the corresponding LLVM SequentialType lay out their components
-/// identically in memory, so doing a GEP accesses the right memory location.
-/// We assume that objects without a known size do not.
-extern bool isSequentialCompatible(tree_node *type);
-
-/// OffsetIsLLVMCompatible - Return true if the given field is offset from the
-/// start of the record by a constant amount which is not humongously big.
-extern bool OffsetIsLLVMCompatible(tree_node *field_decl);
-
-/// ArrayLengthOf - Returns the length of the given gcc array type, or ~0ULL if
-/// the array has variable or unknown length.
-extern uint64_t ArrayLengthOf(tree_node *type);
-
-/// isBitfield - Returns whether to treat the specified field as a bitfield.
-bool isBitfield(tree_node *field_decl);
-
-/// getFieldOffsetInBits - Return the bit offset of a FIELD_DECL in a structure.
-extern uint64_t getFieldOffsetInBits(tree_node *field);
-
-/// ValidateRegisterVariable - Check that a static "asm" variable is
-/// well-formed. If not, emit error messages and return true. If so, return
-/// false.
-bool ValidateRegisterVariable(tree_node *decl);
-
-/// MemRef - This struct holds the information needed for a memory access:
-/// a pointer to the memory, its alignment and whether the access is volatile.
-class MemRef {
-public:
- Value *Ptr;
- bool Volatile;
-private:
- unsigned char LogAlign;
-public:
- explicit MemRef() : Ptr(0), Volatile(false), LogAlign(0) {}
- explicit MemRef(Value *P, uint32_t A, bool V) : Ptr(P), Volatile(V) {
- // Forbid alignment 0 along with non-power-of-2 alignment values.
- assert(isPowerOf2_32(A) && "Alignment not a power of 2!");
- LogAlign = Log2_32(A);
- }
-
- uint32_t getAlignment() const {
- return 1U << LogAlign;
- }
-
- void setAlignment(uint32_t A) {
- LogAlign = Log2_32(A);
- }
-};
-
-/// LValue - This struct represents an lvalue in the program. In particular,
-/// the Ptr member indicates the memory that the lvalue lives in. Alignment
- /// is the alignment of the memory (in bytes). If this is a bitfield reference,
-/// BitStart indicates the first bit in the memory that is part of the field
-/// and BitSize indicates the extent.
-///
-/// "LValue" is intended to be a light-weight object passed around by-value.
-class LValue : public MemRef {
-public:
- unsigned char BitStart;
- unsigned char BitSize;
-public:
- explicit LValue() : BitStart(255), BitSize(255) {}
- explicit LValue(MemRef &M) : MemRef(M), BitStart(255), BitSize(255) {}
- LValue(Value *P, uint32_t A, bool V = false) :
- MemRef(P, A, V), BitStart(255), BitSize(255) {}
- LValue(Value *P, uint32_t A, unsigned BSt, unsigned BSi, bool V = false) :
- MemRef(P, A, V), BitStart(BSt), BitSize(BSi) {
- assert(BitStart == BSt && BitSize == BSi &&
- "Bit values larger than 256?");
- }
-
- bool isBitfield() const { return BitStart != 255; }
-};
-
-/// PhiRecord - This struct holds the LLVM PHI node associated with a GCC phi.
-struct PhiRecord {
- gimple_statement_d *gcc_phi;
- PHINode *PHI;
-};
-
-/// TreeToLLVM - An instance of this class is created and used to convert the
-/// body of each function to LLVM.
-///
-class TreeToLLVM {
- // State that is initialized when the function starts.
- const TargetData &TD;
- tree_node *FnDecl;
- Function *Fn;
- BasicBlock *ReturnBB;
- unsigned ReturnOffset;
-
- // State that changes as the function is emitted.
-
- /// Builder - Instruction creator, the location to insert into is always the
- /// same as &Fn->back().
- LLVMBuilder Builder;
-
- // AllocaInsertionPoint - Place to insert alloca instructions. Lazily created
- // and managed by CreateTemporary.
- Instruction *AllocaInsertionPoint;
-
- // SSAInsertionPoint - Place to insert reads corresponding to SSA default
- // definitions.
- Instruction *SSAInsertionPoint;
-
- /// BasicBlocks - Map from GCC to LLVM basic blocks.
- DenseMap<basic_block_def *, BasicBlock*> BasicBlocks;
-
- /// LocalDecls - Map from local declarations to their associated LLVM values.
- DenseMap<tree_node *, AssertingVH<Value> > LocalDecls;
-
- /// PendingPhis - Phi nodes which have not yet been populated with operands.
- SmallVector<PhiRecord, 16> PendingPhis;
-
- // SSANames - Map from GCC ssa names to the defining LLVM value.
- DenseMap<tree_node *, TrackingVH<Value> > SSANames;
-
-public:
-
- //===---------------------- Local Declarations --------------------------===//
-
- /// DECL_LOCAL - Like DECL_LLVM, returns the LLVM declaration of a variable or
- /// function. However DECL_LOCAL can be used with declarations local to the
- /// current function as well as with global declarations.
- Value *make_decl_local(tree_node *);
- #define DECL_LOCAL(NODE) make_decl_local(NODE)
-
- /// DEFINITION_LOCAL - Like DEFINITION_LLVM, ensures that the initial value or
- /// body of a variable or function will be output. However DEFINITION_LOCAL
- /// can be used with declarations local to the current function as well as
- /// with global declarations.
- Value *make_definition_local(tree_node *);
- #define DEFINITION_LOCAL(NODE) make_definition_local(NODE)
-
- /// SET_DECL_LOCAL - Set the DECL_LOCAL for NODE to LLVM.
- Value *set_decl_local(tree_node *, Value *);
- #define SET_DECL_LOCAL(NODE, LLVM) set_decl_local(NODE, LLVM)
-
- /// DECL_LOCAL_IF_SET - The DECL_LOCAL for NODE, if it is set, or NULL, if it
- /// is not set.
- Value *get_decl_local(tree_node *);
- #define DECL_LOCAL_IF_SET(NODE) (HAS_RTL_P(NODE) ? get_decl_local(NODE) : NULL)
-
- /// DECL_LOCAL_SET_P - Returns nonzero if the DECL_LOCAL for NODE has already
- /// been set.
- #define DECL_LOCAL_SET_P(NODE) (DECL_LOCAL_IF_SET(NODE) != NULL)
-
-
-private:
-
- //===---------------------- Exception Handling --------------------------===//
-
- /// NormalInvokes - Mapping from landing pad number to the set of invoke
- /// instructions that unwind to that landing pad.
- SmallVector<SmallVector<InvokeInst *, 8>, 16> NormalInvokes;
-
- /// ExceptionPtrs - Mapping from EH region index to the local holding the
- /// exception pointer for that region.
- SmallVector<AllocaInst *, 16> ExceptionPtrs;
-
- /// ExceptionFilters - Mapping from EH region index to the local holding the
- /// filter value for that region.
- SmallVector<AllocaInst *, 16> ExceptionFilters;
-
- /// FailureBlocks - Mapping from the index of a must-not-throw EH region to
- /// the block containing the failure code for the region (the code that is
- /// run if an exception is thrown in this region).
- SmallVector<BasicBlock *, 16> FailureBlocks;
-
- /// RewindBB - Block containing code that continues unwinding an exception.
- BasicBlock *RewindBB;
-
- /// RewindTmp - Local holding the exception to continue unwinding.
- AllocaInst *RewindTmp;
-
-public:
- TreeToLLVM(tree_node *fndecl);
- ~TreeToLLVM();
-
- /// getFUNCTION_DECL - Return the FUNCTION_DECL node for the current function
- /// being compiled.
- tree_node *getFUNCTION_DECL() const { return FnDecl; }
-
- /// EmitFunction - Convert 'fndecl' to LLVM code.
- Function *EmitFunction();
-
- /// EmitBasicBlock - Convert the given basic block.
- void EmitBasicBlock(basic_block_def *bb);
-
- /// EmitLV - Convert the specified l-value tree node to LLVM code, returning
- /// the address of the result.
- LValue EmitLV(tree_node *exp);
-
- void TODO(tree_node *exp = 0);
-
- /// CastToAnyType - Cast the specified value to the specified type regardless
- /// of the types involved. This is an inferred cast.
- Value *CastToAnyType (Value *V, bool VSigned, const Type *Ty, bool TySigned);
-
- /// CastToUIntType - Cast the specified value to the specified type assuming
- /// that V's type and Ty are integral types. This arbitrates between BitCast,
- /// Trunc and ZExt.
- Value *CastToUIntType(Value *V, const Type *Ty);
-
- /// CastToSIntType - Cast the specified value to the specified type assuming
- /// that V's type and Ty are integral types. This arbitrates between BitCast,
- /// Trunc and SExt.
- Value *CastToSIntType(Value *V, const Type *Ty);
-
- /// CastToFPType - Cast the specified value to the specified type assuming
- /// that V's type and Ty are floating point types. This arbitrates between
- /// BitCast, FPTrunc and FPExt.
- Value *CastToFPType(Value *V, const Type *Ty);
-
- /// CreateAnyAdd - Add two LLVM scalar values with the given GCC type. Does
- /// not support complex numbers. The type is used to set overflow flags.
- Value *CreateAnyAdd(Value *LHS, Value *RHS, tree_node *type);
-
- /// CreateAnyMul - Multiply two LLVM scalar values with the given GCC type.
- /// Does not support complex numbers. The type is used to set overflow flags.
- Value *CreateAnyMul(Value *LHS, Value *RHS, tree_node *type);
-
- /// CreateAnyNeg - Negate an LLVM scalar value with the given GCC type. Does
- /// not support complex numbers. The type is used to set overflow flags.
- Value *CreateAnyNeg(Value *V, tree_node *type);
-
- /// CreateAnySub - Subtract two LLVM scalar values with the given GCC type.
- /// Does not support complex numbers.
- Value *CreateAnySub(Value *LHS, Value *RHS, tree_node *type);
-
- /// CreateTemporary - Create a new alloca instruction of the specified type,
- /// inserting it into the entry block and returning it. The resulting
- /// instruction's type is a pointer to the specified type.
- AllocaInst *CreateTemporary(const Type *Ty, unsigned align=0);
-
- /// CreateTempLoc - Like CreateTemporary, but returns a MemRef.
- MemRef CreateTempLoc(const Type *Ty);
-
- /// EmitAggregateCopy - Copy the elements from SrcLoc to DestLoc, using the
- /// GCC type specified by GCCType to know which elements to copy.
- void EmitAggregateCopy(MemRef DestLoc, MemRef SrcLoc, tree_node *GCCType);
-
- /// EmitAggregate - Store the specified tree node into the location given by
- /// DestLoc.
- void EmitAggregate(tree_node *exp, const MemRef &DestLoc);
-
-private: // Helper functions.
-
- /// StartFunctionBody - Start the emission of 'fndecl', outputting all
- /// declarations for parameters and setting things up.
- void StartFunctionBody();
-
- /// FinishFunctionBody - Once the body of the function has been emitted, this
- /// cleans up and returns the result function.
- Function *FinishFunctionBody();
-
- /// PopulatePhiNodes - Populate generated phi nodes with their operands.
- void PopulatePhiNodes();
-
- /// getBasicBlock - Find or create the LLVM basic block corresponding to BB.
- BasicBlock *getBasicBlock(basic_block_def *bb);
-
- /// getLabelDeclBlock - Lazily get or create a basic block for the specified
- /// label.
- BasicBlock *getLabelDeclBlock(tree_node *LabelDecl);
-
- /// DefineSSAName - Use the given value as the definition of the given SSA
- /// name. Returns the provided value as a convenience.
- Value *DefineSSAName(tree_node *reg, Value *Val);
-
- /// BeginBlock - Add the specified basic block to the end of the function. If
- /// the previous block falls through into it, add an explicit branch.
- void BeginBlock(BasicBlock *BB);
-
- /// EmitAggregateZero - Zero the elements of DestLoc.
- void EmitAggregateZero(MemRef DestLoc, tree_node *GCCType);
-
- /// EmitMemCpy/EmitMemMove/EmitMemSet - Emit an llvm.memcpy, llvm.memmove or
- /// llvm.memset call with the specified operands. Returns DestPtr bitcast
- /// to i8*.
- Value *EmitMemCpy(Value *DestPtr, Value *SrcPtr, Value *Size, unsigned Align);
- Value *EmitMemMove(Value *DestPtr, Value *SrcPtr, Value *Size, unsigned Align);
- Value *EmitMemSet(Value *DestPtr, Value *SrcVal, Value *Size, unsigned Align);
-
- /// EmitLandingPads - Emit EH landing pads.
- void EmitLandingPads();
-
- /// EmitFailureBlocks - Emit the blocks containing failure code executed when
- /// an exception is thrown in a must-not-throw region.
- void EmitFailureBlocks();
-
- /// EmitRewindBlock - Emit the block containing code to continue unwinding an
- /// exception.
- void EmitRewindBlock();
-
- /// EmitDebugInfo - Return true if debug info is to be emitted for the
- /// current function.
- bool EmitDebugInfo();
-
-private: // Helpers for exception handling.
-
- /// getLandingPad - Return the landing pad for the given exception handling
- /// region, creating it if necessary.
- BasicBlock *getLandingPad(unsigned RegionNo);
-
- /// getExceptionPtr - Return the local holding the exception pointer for the
- /// given exception handling region, creating it if necessary.
- AllocaInst *getExceptionPtr(unsigned RegionNo);
-
- /// getExceptionFilter - Return the local holding the filter value for the
- /// given exception handling region, creating it if necessary.
- AllocaInst *getExceptionFilter(unsigned RegionNo);
-
- /// getFailureBlock - Return the basic block containing the failure code for
- /// the given exception handling region, creating it if necessary.
- BasicBlock *getFailureBlock(unsigned RegionNo);
-
-private:
- void EmitAutomaticVariableDecl(tree_node *decl);
-
- /// EmitAnnotateIntrinsic - Emits the call implementing the annotate attribute.
- void EmitAnnotateIntrinsic(Value *V, tree_node *decl);
-
- /// EmitTypeGcroot - Emits the call that marks the value as a gcroot.
- void EmitTypeGcroot(Value *V);
-
-private:
-
- //===------------------ Render* - Convert GIMPLE to LLVM ----------------===//
-
- void RenderGIMPLE_ASM(gimple_statement_d *stmt);
- void RenderGIMPLE_ASSIGN(gimple_statement_d *stmt);
- void RenderGIMPLE_CALL(gimple_statement_d *stmt);
- void RenderGIMPLE_COND(gimple_statement_d *stmt);
- void RenderGIMPLE_EH_DISPATCH(gimple_statement_d *stmt);
- void RenderGIMPLE_GOTO(gimple_statement_d *stmt);
- void RenderGIMPLE_RESX(gimple_statement_d *stmt);
- void RenderGIMPLE_RETURN(gimple_statement_d *stmt);
- void RenderGIMPLE_SWITCH(gimple_statement_d *stmt);
-
- // Render helpers.
-
- /// EmitAssignRHS - Convert the RHS of a scalar GIMPLE_ASSIGN to LLVM.
- Value *EmitAssignRHS(gimple_statement_d *stmt);
-
- /// EmitAssignSingleRHS - Helper for EmitAssignRHS. Handles those RHS that
- /// are not register expressions.
- Value *EmitAssignSingleRHS(tree_node *rhs);
-
- /// OutputCallRHS - Convert the RHS of a GIMPLE_CALL.
- Value *OutputCallRHS(gimple_statement_d *stmt, const MemRef *DestLoc);
-
- /// WriteScalarToLHS - Store RHS, a non-aggregate value, into the given LHS.
- void WriteScalarToLHS(tree_node *lhs, Value *Scalar);
-
-private:
-
- //===---------- EmitReg* - Convert register expression to LLVM ----------===//
-
- /// GetRegType - Returns the LLVM type to use for registers that hold a value
- /// of the scalar GCC type 'type'. All of the EmitReg* routines use this to
- /// determine the LLVM type to return.
- const Type *GetRegType(tree_node *type);
-
- /// UselesslyTypeConvert - The useless_type_conversion_p predicate implicitly
- /// defines the GCC middle-end type system. For scalar GCC types inner_type
- /// and outer_type, if 'useless_type_conversion_p(outer_type, inner_type)' is
- /// true then the corresponding LLVM inner and outer types (see GetRegType)
- /// are equal except possibly if they are both pointer types (casts to 'void*'
- /// are considered useless for example) or types derived from pointer types
- /// (vector types with pointer element type are the only possibility here).
- /// This method converts LLVM values of the inner type to the outer type.
- Value *UselesslyTypeConvert(Value *V, const Type *Ty) {
- return Builder.CreateBitCast(V, Ty);
- }
-
- /// EmitRegister - Convert the specified gimple register or local constant of
- /// register type to an LLVM value. Only creates code in the entry block.
- Value *EmitRegister(tree_node *reg);
-
- /// EmitReg_SSA_NAME - Return the defining value of the given SSA_NAME.
- /// Only creates code in the entry block.
- Value *EmitReg_SSA_NAME(tree_node *reg);
-
- // Unary expressions.
- Value *EmitReg_ABS_EXPR(tree_node *op);
- Value *EmitReg_BIT_NOT_EXPR(tree_node *op);
- Value *EmitReg_CONJ_EXPR(tree_node *op);
- Value *EmitReg_CONVERT_EXPR(tree_node *type, tree_node *op);
- Value *EmitReg_NEGATE_EXPR(tree_node *op);
- Value *EmitReg_PAREN_EXPR(tree_node *exp);
- Value *EmitReg_TRUTH_NOT_EXPR(tree_node *type, tree_node *op);
-
- // Comparisons.
-
- /// EmitCompare - Compare LHS with RHS using the appropriate comparison code.
- /// The result is an i1 boolean.
- Value *EmitCompare(tree_node *lhs, tree_node *rhs, unsigned code);
-
- // Binary expressions.
- Value *EmitReg_MinMaxExpr(tree_node *type, tree_node *op0, tree_node *op1,
- unsigned UIPred, unsigned SIPred, unsigned Opc,
- bool isMax);
- Value *EmitReg_RotateOp(tree_node *type, tree_node *op0, tree_node *op1,
- unsigned Opc1, unsigned Opc2);
- Value *EmitReg_ShiftOp(tree_node *op0, tree_node *op1, unsigned Opc);
- Value *EmitReg_TruthOp(tree_node *type, tree_node *op0, tree_node *op1,
- unsigned Opc);
- Value *EmitReg_BIT_AND_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_BIT_IOR_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_BIT_XOR_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_CEIL_DIV_EXPR(tree_node *type, tree_node *op0, tree_node *op1);
- Value *EmitReg_COMPLEX_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_FLOOR_DIV_EXPR(tree_node *type, tree_node *op0,
- tree_node *op1);
- Value *EmitReg_FLOOR_MOD_EXPR(tree_node *type, tree_node *op0,
- tree_node *op1);
- Value *EmitReg_MINUS_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_MULT_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_PLUS_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_POINTER_PLUS_EXPR(tree_node *type, tree_node *op0,
- tree_node *op1);
- Value *EmitReg_RDIV_EXPR(tree_node *op0, tree_node *op1);
- Value *EmitReg_ROUND_DIV_EXPR(tree_node *type, tree_node *op0,
- tree_node *op1);
- Value *EmitReg_TRUNC_DIV_EXPR(tree_node *op0, tree_node *op1, bool isExact);
- Value *EmitReg_TRUNC_MOD_EXPR(tree_node *op0, tree_node *op1);
-
- Value *EmitLoadOfLValue(tree_node *exp);
- Value *EmitOBJ_TYPE_REF(tree_node *exp);
- Value *EmitADDR_EXPR(tree_node *exp);
- Value *EmitCallOf(Value *Callee, gimple_statement_d *stmt,
- const MemRef *DestLoc, const AttrListPtr &PAL);
- CallInst *EmitSimpleCall(StringRef CalleeName, tree_node *ret_type,
- /* arguments */ ...) END_WITH_NULL;
- Value *EmitFieldAnnotation(Value *FieldPtr, tree_node *FieldDecl);
-
- // Inline Assembly and Register Variables.
- Value *EmitReadOfRegisterVariable(tree_node *vardecl);
- void EmitModifyOfRegisterVariable(tree_node *vardecl, Value *RHS);
-
- // Helpers for Builtin Function Expansion.
- void EmitMemoryBarrier(bool ll, bool ls, bool sl, bool ss, bool device);
- Value *BuildVector(const std::vector<Value*> &Elts);
- Value *BuildVector(Value *Elt, ...);
- Value *BuildVectorShuffle(Value *InVec1, Value *InVec2, ...);
- Value *BuildBinaryAtomicBuiltin(gimple_statement_d *stmt, Intrinsic::ID id);
- Value *BuildCmpAndSwapAtomicBuiltin(gimple_statement_d *stmt, tree_node *type,
- bool isBool);
-
- // Builtin Function Expansion.
- bool EmitBuiltinCall(gimple_statement_d *stmt, tree_node *fndecl,
- const MemRef *DestLoc, Value *&Result);
- bool EmitFrontendExpandedBuiltinCall(gimple_statement_d *stmt,
- tree_node *fndecl, const MemRef *DestLoc,
- Value *&Result);
- bool EmitBuiltinUnaryOp(Value *InVal, Value *&Result, Intrinsic::ID Id);
- Value *EmitBuiltinSQRT(gimple_statement_d *stmt);
- Value *EmitBuiltinPOWI(gimple_statement_d *stmt);
- Value *EmitBuiltinPOW(gimple_statement_d *stmt);
- Value *EmitBuiltinLCEIL(gimple_statement_d *stmt);
- Value *EmitBuiltinLFLOOR(gimple_statement_d *stmt);
-
- bool EmitBuiltinConstantP(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinAlloca(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinExpect(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinExtendPointer(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinVAStart(gimple_statement_d *stmt);
- bool EmitBuiltinVAEnd(gimple_statement_d *stmt);
- bool EmitBuiltinVACopy(gimple_statement_d *stmt);
- bool EmitBuiltinMemCopy(gimple_statement_d *stmt, Value *&Result,
- bool isMemMove, bool SizeCheck);
- bool EmitBuiltinMemSet(gimple_statement_d *stmt, Value *&Result,
- bool SizeCheck);
- bool EmitBuiltinBZero(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinPrefetch(gimple_statement_d *stmt);
- bool EmitBuiltinReturnAddr(gimple_statement_d *stmt, Value *&Result,
- bool isFrame);
- bool EmitBuiltinExtractReturnAddr(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinFrobReturnAddr(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinStackSave(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinStackRestore(gimple_statement_d *stmt);
- bool EmitBuiltinEHPointer(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinDwarfCFA(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinDwarfSPColumn(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinEHReturnDataRegno(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinEHReturn(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinInitDwarfRegSizes(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinUnwindInit(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinAdjustTrampoline(gimple_statement_d *stmt, Value *&Result);
- bool EmitBuiltinInitTrampoline(gimple_statement_d *stmt, Value *&Result);
-
- // Complex Math Expressions.
- Value *CreateComplex(Value *Real, Value *Imag, tree_node *elt_type);
- void SplitComplex(Value *Complex, Value *&Real, Value *&Imag,
- tree_node *elt_type);
-
- // L-Value Expressions.
- LValue EmitLV_ARRAY_REF(tree_node *exp);
- LValue EmitLV_BIT_FIELD_REF(tree_node *exp);
- LValue EmitLV_COMPONENT_REF(tree_node *exp);
- LValue EmitLV_DECL(tree_node *exp);
- LValue EmitLV_INDIRECT_REF(tree_node *exp);
- LValue EmitLV_VIEW_CONVERT_EXPR(tree_node *exp);
- LValue EmitLV_WITH_SIZE_EXPR(tree_node *exp);
- LValue EmitLV_XXXXPART_EXPR(tree_node *exp, unsigned Idx);
- LValue EmitLV_SSA_NAME(tree_node *exp);
- LValue EmitLV_TARGET_MEM_REF(tree_node *exp);
-
- // Constant Expressions.
- Value *EmitINTEGER_CST(tree_node *exp);
- Value *EmitREAL_CST(tree_node *exp);
- Value *EmitCONSTRUCTOR(tree_node *exp, const MemRef *DestLoc);
-
-
- // Emit helpers.
-
- /// EmitMinInvariant - The given value is constant in this function. Return
- /// the corresponding LLVM value. Only creates code in the entry block.
- Value *EmitMinInvariant(tree_node *reg);
-
- /// EmitInvariantAddress - The given address is constant in this function.
- /// Return the corresponding LLVM value. Only creates code in the entry block.
- Value *EmitInvariantAddress(tree_node *addr);
-
- /// EmitRegisterConstant - Convert the given global constant of register type
- /// to an LLVM constant. Creates no code, only constants.
- Constant *EmitRegisterConstant(tree_node *reg);
-
- /// EmitComplexRegisterConstant - Turn the given COMPLEX_CST into an LLVM
- /// constant of the corresponding register type.
- Constant *EmitComplexRegisterConstant(tree_node *reg);
-
- /// EmitIntegerRegisterConstant - Turn the given INTEGER_CST into an LLVM
- /// constant of the corresponding register type.
- Constant *EmitIntegerRegisterConstant(tree_node *reg);
-
- /// EmitRealRegisterConstant - Turn the given REAL_CST into an LLVM constant
- /// of the corresponding register type.
- Constant *EmitRealRegisterConstant(tree_node *reg);
-
- /// EmitConstantVectorConstructor - Turn the given constant CONSTRUCTOR into
- /// an LLVM constant of the corresponding vector register type.
- Constant *EmitConstantVectorConstructor(tree_node *reg);
-
- /// EmitVectorRegisterConstant - Turn the given VECTOR_CST into an LLVM
- /// constant of the corresponding register type.
- Constant *EmitVectorRegisterConstant(tree_node *reg);
-
- /// Mem2Reg - Convert a value of in-memory type (that given by ConvertType)
- /// to in-register type (that given by GetRegType). TODO: Eliminate these
- /// methods: "memory" values should never be held in registers. Currently
- /// this is mainly used for marshalling function parameters and return values,
- /// but that should be completely independent of the reg vs mem value logic.
- Value *Mem2Reg(Value *V, tree_node *type, LLVMBuilder &Builder);
- Constant *Mem2Reg(Constant *C, tree_node *type, TargetFolder &Folder);
-
- /// Reg2Mem - Convert a value of in-register type (that given by GetRegType)
- /// to in-memory type (that given by ConvertType). TODO: Eliminate this
- /// method: "memory" values should never be held in registers. Currently
- /// this is mainly used for marshalling function parameters and return values,
- /// but that should be completely independent of the reg vs mem value logic.
- Value *Reg2Mem(Value *V, tree_node *type, LLVMBuilder &Builder);
-
- /// EmitMemory - Convert the specified gimple register or local constant of
- /// register type to an LLVM value with in-memory type (given by ConvertType).
- /// TODO: Eliminate this method, see Mem2Reg and Reg2Mem above.
- Value *EmitMemory(tree_node *reg);
-
- /// LoadRegisterFromMemory - Loads a value of the given scalar GCC type from
- /// the memory location pointed to by Loc. Takes care of adjusting for any
- /// differences between in-memory and in-register types (the returned value
- /// is of in-register type, as returned by GetRegType).
- Value *LoadRegisterFromMemory(MemRef Loc, tree_node *type,
- LLVMBuilder &Builder);
-
- /// StoreRegisterToMemory - Stores the given value to the memory pointed to by
- /// Loc. Takes care of adjusting for any differences between the value's type
- /// (which is the in-register type given by GetRegType) and the in-memory type.
- void StoreRegisterToMemory(Value *V, MemRef Loc, tree_node *type,
- LLVMBuilder &Builder);
-
-private:
- // Optional target defined builtin intrinsic expanding function.
- bool TargetIntrinsicLower(gimple_statement_d *stmt,
- tree_node *fndecl,
- const MemRef *DestLoc,
- Value *&Result,
- const Type *ResultType,
- std::vector<Value*> &Ops);
-
-public:
- // Helper for taking the address of a label.
- Constant *EmitLV_LABEL_DECL(tree_node *exp);
-};
-
-#endif /* LLVM_INTERNAL_H */
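For orientation, the class declared above is instantiated once per function by
the conversion driver. A minimal sketch of that flow, assuming only the
constructor and EmitFunction declared above (the wrapper function itself is
hypothetical and not part of this commit):

  // Hypothetical driver; only TreeToLLVM(fndecl) and EmitFunction() are
  // taken from the declaration above.
  static void ConvertOneFunction(tree_node *fndecl) {
    TreeToLLVM Emitter(fndecl);            // Sets up entry block and parameters.
    Function *Fn = Emitter.EmitFunction(); // Converts every GCC basic block.
    (void)Fn;                              // The result lives in the LLVM module.
  }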
Removed: dragonegg/trunk/llvm-tree.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-tree.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-tree.cpp (original)
+++ dragonegg/trunk/llvm-tree.cpp (removed)
@@ -1,143 +0,0 @@
-//===---- llvm-tree.cpp - Utility functions for working with GCC trees ----===//
-//
-// Copyright (C) 2010, 2011 Duncan Sands.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file defines utility functions for working with GCC trees.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-tree.h"
-
-// LLVM headers
-#include "llvm/ADT/Twine.h"
-
-// System headers
-#include <gmp.h>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-
-#include "flags.h"
-}
-
-using namespace llvm;
-
-/// concatIfNotEmpty - Concatenate the given strings if they are both non-empty.
-/// Otherwise return the empty string.
-static std::string concatIfNotEmpty(const std::string &Left,
- const std::string &Right) {
- if (Left.empty() || Right.empty())
- return std::string();
- return Left + Right;
-}
-
-/// getDescriptiveName - Return a helpful name for the given tree, or an empty
-/// string if no sensible name was found. These names are used to make the IR
-/// more readable, and have no official status.
-std::string getDescriptiveName(tree t) {
- if (!t) return std::string(); // Occurs when recursing.
-
- // Name identifier nodes after their contents. This gives the desired effect
- // when called recursively.
- if (TREE_CODE(t) == IDENTIFIER_NODE)
- return std::string(IDENTIFIER_POINTER(t), IDENTIFIER_LENGTH(t));
-
- // Handle declarations of all kinds.
- if (DECL_P(t)) {
- // If the declaration comes with a name then use it.
- if (DECL_NAME(t)) // Always an identifier node.
- return std::string(IDENTIFIER_POINTER(DECL_NAME(t)),
- IDENTIFIER_LENGTH(DECL_NAME(t)));
- // Use a generic name for function results.
- if (TREE_CODE(t) == RESULT_DECL)
- return "<retval>";
- // Labels have their own numeric unique identifiers.
- if (TREE_CODE(t) == LABEL_DECL && LABEL_DECL_UID(t) != -1) {
- Twine LUID(LABEL_DECL_UID(t));
- return ("L" + LUID).str();
- }
- // Otherwise use the generic UID.
- const char *Annotation = TREE_CODE(t) == CONST_DECL ? "C." : "D.";
- Twine UID(DECL_UID(t));
- return (Annotation + UID).str();
- }
-
- // Handle types of all kinds.
- if (TYPE_P(t)) {
- // If the type comes with a name then use it.
- const std::string &TypeName = getDescriptiveName(TYPE_NAME(t));
- if (!TypeName.empty()) {
- // Annotate the name with a description of the type's class.
- if (TREE_CODE(t) == ENUMERAL_TYPE)
- return "enum." + TypeName;
- if (TREE_CODE(t) == RECORD_TYPE)
- return "struct." + TypeName;
- if (TREE_CODE(t) == QUAL_UNION_TYPE)
- return "qualunion." + TypeName;
- if (TREE_CODE(t) == UNION_TYPE)
- return "union." + TypeName;
- return TypeName;
- }
-
- // Try to deduce a useful name.
- if (TREE_CODE(t) == ARRAY_TYPE)
- // If the element type is E, name the array E[] (regardless of the number
- // of dimensions).
- return concatIfNotEmpty(getDescriptiveName(TREE_TYPE(t)), "[]");
- if (TREE_CODE(t) == COMPLEX_TYPE)
- // If the element type is E, name the complex number complex.E.
- return concatIfNotEmpty("complex.", getDescriptiveName(TREE_TYPE(t)));
- if (TREE_CODE(t) == POINTER_TYPE)
- // If the element type is E, name the pointer E*.
- return concatIfNotEmpty(getDescriptiveName(TREE_TYPE(t)), "*");
- if (TREE_CODE(t) == REFERENCE_TYPE)
- // If the element type is E, name the reference E&.
- return concatIfNotEmpty(getDescriptiveName(TREE_TYPE(t)), "&");
-
- return TypeName;
- }
-
- // Handle SSA names.
- if (TREE_CODE(t) == SSA_NAME) {
- Twine NameVersion(SSA_NAME_VERSION(t));
- return concatIfNotEmpty(getDescriptiveName(SSA_NAME_VAR(t)),
- ("_" + NameVersion).str());
- }
-
- // A mysterious tree, just give up.
- return std::string();
-}
-
-/// hasNUW - Return whether overflowing unsigned operations on this type result
-/// in undefined behaviour.
-bool hasNUW(tree type) {
- return TYPE_UNSIGNED(type) && !TYPE_OVERFLOW_WRAPS(type);
-}
-
-/// hasNSW - Return whether overflowing signed operations on this type result
-/// in undefined behaviour.
-bool hasNSW(tree type) {
- return !TYPE_UNSIGNED(type) && !TYPE_OVERFLOW_WRAPS(type);
-}
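The two predicates above are what let arithmetic emission set overflow flags.
A sketch of the intended consumption, assuming an LLVMBuilder as used elsewhere
in this commit; the helper name is illustrative, not part of the source:

  // Illustrative only: mark an addition as non-wrapping when GCC says
  // overflow is undefined for this type (compare CreateAnyAdd above).
  Value *EmitNoWrapAdd(LLVMBuilder &Builder, Value *LHS, Value *RHS, tree type) {
    return Builder.CreateAdd(LHS, RHS, "", hasNUW(type), hasNSW(type));
  }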
Removed: dragonegg/trunk/llvm-tree.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-tree.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-tree.h (original)
+++ dragonegg/trunk/llvm-tree.h (removed)
@@ -1,44 +0,0 @@
-//=-- llvm-tree.h - Utility functions for working with GCC trees --*- C++ -*-=//
-//
-// Copyright (C) 2010, 2011 Duncan Sands.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file declares utility functions for working with GCC trees.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_TREE_H
-#define LLVM_TREE_H
-
-// System headers
-#include <string>
-
-union tree_node;
-
-/// getDescriptiveName - Return a helpful name for the given tree, or an empty
-/// string if no sensible name was found. These names are used to make the IR
-/// more readable, and have no official status.
-std::string getDescriptiveName(union tree_node *t);
-
-/// hasNUW - Return whether overflowing unsigned operations on this type result
-/// in undefined behaviour.
-bool hasNUW(tree_node *type);
-
-/// hasNSW - Return whether overflowing signed operations on this type result
-/// in undefined behaviour.
-bool hasNSW(tree_node *type);
-
-#endif /* LLVM_TREE_H */
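As a reading aid, a few illustrative inputs and the names getDescriptiveName
would produce for them, following the rules in the implementation above:

  // Illustrative examples only:
  //   RECORD_TYPE named "foo"            -> "struct.foo"
  //   ARRAY_TYPE of that record          -> "struct.foo[]"
  //   POINTER_TYPE to that record        -> "struct.foo*"
  //   SSA_NAME version 3 of variable "x" -> "x_3"
  //   unnamed CONST_DECL with UID 42     -> "C.42"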
Removed: dragonegg/trunk/llvm-types.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/llvm-types.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/llvm-types.cpp (original)
+++ dragonegg/trunk/llvm-types.cpp (removed)
@@ -1,2076 +0,0 @@
-//===-------- llvm-types.cpp - Converting GCC types to LLVM types ---------===//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Chris Lattner,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This is the code that converts GCC tree types into LLVM types.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-abi.h"
-#include "llvm-tree.h"
-extern "C" {
-#include "llvm-cache.h"
-}
-
-// LLVM headers
-#include "llvm/Module.h"
-#include "llvm/Assembly/Writer.h"
-#include "llvm/Support/ErrorHandling.h"
-#include "llvm/Support/raw_ostream.h"
-#include "llvm/Target/TargetMachine.h"
-
-// System headers
-#include <gmp.h>
-#include <map>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-}
-
-// NoLength - Special value used to indicate that an array has variable or
-// unknown length.
-static const uint64_t NoLength = ~(uint64_t)0;
-
-static LLVMContext &Context = getGlobalContext();
-
-//===----------------------------------------------------------------------===//
-// Matching LLVM types with GCC trees
-//===----------------------------------------------------------------------===//
-
-// GET_TYPE_LLVM/SET_TYPE_LLVM - Associate an LLVM type with each TREE type.
-// These are lazily computed by ConvertType.
-
-const Type *llvm_set_type(tree Tr, const Type *Ty) {
- assert(TYPE_P(Tr) && "Expected a gcc type!");
-
- // Check that the LLVM and GCC types have the same size, or, if the type has
- // variable size, that the LLVM type is not bigger than any possible value of
- // the GCC type.
-#ifndef NDEBUG
- if (TYPE_SIZE(Tr) && Ty->isSized() && isInt64(TYPE_SIZE(Tr), true)) {
- uint64_t LLVMSize = getTargetData().getTypeAllocSizeInBits(Ty);
- if (getInt64(TYPE_SIZE(Tr), true) != LLVMSize) {
- errs() << "GCC: ";
- debug_tree(Tr);
- errs() << "LLVM: ";
- Ty->print(errs());
- errs() << " (" << LLVMSize << " bits)\n";
- llvm_unreachable("LLVM type size doesn't match GCC type size!");
- }
- }
-#endif
-
- return (const Type *)llvm_set_cached(Tr, Ty);
-}
-
-#define SET_TYPE_LLVM(NODE, TYPE) llvm_set_type(NODE, TYPE)
-
-const Type *llvm_get_type(tree Tr) {
- assert(TYPE_P(Tr) && "Expected a gcc type!");
- return (const Type *)llvm_get_cached(Tr);
-}
-
-#define GET_TYPE_LLVM(NODE) llvm_get_type(NODE)
-
-//TODO// Read LLVM Types string table
-//TODOvoid readLLVMTypesStringTable() {
-//TODO
-//TODO GlobalValue *V = TheModule->getNamedGlobal("llvm.pch.types");
-//TODO if (!V)
-//TODO return;
-//TODO
-//TODO // Value *GV = TheModule->getValueSymbolTable().lookup("llvm.pch.types");
-//TODO GlobalVariable *GV = cast<GlobalVariable>(V);
-//TODO ConstantStruct *LTypesNames = cast<ConstantStruct>(GV->getOperand(0));
-//TODO
-//TODO for (unsigned i = 0; i < LTypesNames->getNumOperands(); ++i) {
-//TODO const Type *Ty = NULL;
-//TODO
-//TODO if (ConstantArray *CA =
-//TODO dyn_cast<ConstantArray>(LTypesNames->getOperand(i))) {
-//TODO std::string Str = CA->getAsString();
-//TODO Ty = TheModule->getTypeByName(Str);
-//TODO assert (Ty != NULL && "Invalid Type in LTypes string table");
-//TODO }
-//TODO // If V is not a string then it is empty. Insert NULL to represent
-//TODO // empty entries.
-//TODO LTypes.push_back(Ty);
-//TODO }
-//TODO
-//TODO // Now, llvm.pch.types value is not required so remove it from the symbol
-//TODO // table.
-//TODO GV->eraseFromParent();
-//TODO}
-//TODO
-//TODO
-//TODO// GCC trees use the LTypes vector's index to reach LLVM types.
-//TODO// Create a string table to hold these LLVM types' names. This string
-//TODO// table will be used to recreate the LTypes vector after loading PCH.
-//TODOvoid writeLLVMTypesStringTable() {
-//TODO
-//TODO if (LTypes.empty())
-//TODO return;
-//TODO
-//TODO std::vector<Constant *> LTypesNames;
-//TODO std::map < const Type *, std::string > TypeNameMap;
-//TODO
-//TODO // Collect Type Names in advance.
-//TODO const TypeSymbolTable &ST = TheModule->getTypeSymbolTable();
-//TODO TypeSymbolTable::const_iterator TI = ST.begin();
-//TODO for (; TI != ST.end(); ++TI) {
-//TODO TypeNameMap[TI->second] = TI->first;
-//TODO }
-//TODO
-//TODO // Populate LTypesNames vector.
-//TODO for (std::vector<const Type *>::iterator I = LTypes.begin(),
-//TODO E = LTypes.end(); I != E; ++I) {
-//TODO const Type *Ty = *I;
-//TODO
-//TODO // Give names to nameless types.
-//TODO if (Ty && TypeNameMap[Ty].empty()) {
-//TODO std::string NewName =
-//TODO TheModule->getTypeSymbolTable().getUniqueName("llvm.fe.ty");
-//TODO TheModule->addTypeName(NewName, Ty);
-//TODO TypeNameMap[*I] = NewName;
-//TODO }
-//TODO
-//TODO const std::string &TypeName = TypeNameMap[*I];
-//TODO LTypesNames.push_back(ConstantArray::get(Context, TypeName, false));
-//TODO }
-//TODO
-//TODO // Create string table.
-//TODO Constant *LTypesNameTable = ConstantStruct::get(Context, LTypesNames, false);
-//TODO
-//TODO // Create variable to hold this string table.
-//TODO GlobalVariable *GV = new GlobalVariable(*TheModule,
-//TODO LTypesNameTable->getType(), true,
-//TODO GlobalValue::ExternalLinkage,
-//TODO LTypesNameTable,
-//TODO "llvm.pch.types");
-//TODO}
-
-//===----------------------------------------------------------------------===//
-// Recursive Type Handling Code and Data
-//===----------------------------------------------------------------------===//
-
-// Recursive types are a major pain to handle for a couple of reasons. Because
-// of this, when we start parsing a struct or a union, we globally change how
-// POINTER_TYPE and REFERENCE_TYPE are handled. In particular, instead of
-// actually recursing and computing the type they point to, they will return an
-// opaque*, and remember that they did this in PointersToReresolve.
-
-
-/// GetFunctionType - This is just a helper like FunctionType::get but that
-/// takes PATypeHolders.
-static FunctionType *GetFunctionType(const PATypeHolder &Res,
- std::vector<PATypeHolder> &ArgTys,
- bool isVarArg) {
- std::vector<const Type*> ArgTysP;
- ArgTysP.reserve(ArgTys.size());
- for (unsigned i = 0, e = ArgTys.size(); i != e; ++i)
- ArgTysP.push_back(ArgTys[i]);
-
- return FunctionType::get(Res, ArgTysP, isVarArg);
-}
-
-//===----------------------------------------------------------------------===//
-// Type Conversion Utilities
-//===----------------------------------------------------------------------===//
-
-/// ArrayLengthOf - Returns the length of the given gcc array type, or NoLength
-/// if the array has variable or unknown length.
-uint64_t ArrayLengthOf(tree type) {
- assert(TREE_CODE(type) == ARRAY_TYPE && "Only for array types!");
- // If the element type has variable size and the array type has variable
- // length, but by some miracle the product gives a constant size, then we
- // also return NoLength here. I can live with this, and I bet you can too!
- if (!isInt64(TYPE_SIZE(type), true) ||
- !isInt64(TYPE_SIZE(TREE_TYPE(type)), true))
- return NoLength;
- // May return zero for arrays that gcc considers to have non-zero length, but
- // only if the array type has zero size (this can happen if the element type
- // has zero size), in which case the discrepancy doesn't matter.
- //
- // If the user increased the alignment of the element type, then the size of
- // the array type is rounded up by that alignment, but the size of the element
- // is not. Since gcc requires the user alignment to be strictly smaller than
- // the element size, this does not impact the length computation.
- return integer_zerop(TYPE_SIZE(type)) ? 0 : getInt64(TYPE_SIZE(type), true) /
- getInt64(TYPE_SIZE(TREE_TYPE(type)), true);
-}
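A worked example of the arithmetic above, assuming a 32-bit int:

  // For 'int a[10]':
  //   TYPE_SIZE(type)            = 320 bits
  //   TYPE_SIZE(TREE_TYPE(type)) =  32 bits
  //   ArrayLengthOf(type)        = 320 / 32 = 10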
-
-/// getFieldOffsetInBits - Return the bit offset of a FIELD_DECL in a structure.
-uint64_t getFieldOffsetInBits(tree field) {
- assert(OffsetIsLLVMCompatible(field) && "Offset is not constant!");
- uint64_t Result = getInt64(DECL_FIELD_BIT_OFFSET(field), true);
- Result += getInt64(DECL_FIELD_OFFSET(field), true) * BITS_PER_UNIT;
- return Result;
-}
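A worked example of the offset computation above, with illustrative values:

  //   DECL_FIELD_BIT_OFFSET(field) = 5
  //   DECL_FIELD_OFFSET(field)     = 8 units, BITS_PER_UNIT = 8
  //   getFieldOffsetInBits(field)  = 5 + 8 * 8 = 69 bits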
-
-/// GetUnitPointerType - Returns an LLVM pointer type which points to memory one
-/// address unit wide. For example, on a machine which has 16 bit bytes returns
-/// an i16*.
-const Type *GetUnitPointerType(LLVMContext &C, unsigned AddrSpace) {
- assert(!(BITS_PER_UNIT & 7) && "Unit size not a multiple of 8 bits!");
- return IntegerType::get(C, BITS_PER_UNIT)->getPointerTo(AddrSpace);
-}
-
-// isPassedByInvisibleReference - Return true if an argument of the specified
-// type should be passed in by invisible reference.
-//
-bool isPassedByInvisibleReference(tree Type) {
- // Don't crash in this case.
- if (Type == error_mark_node)
- return false;
-
- // FIXME: Search for TREE_ADDRESSABLE in calls.c, and see if there are other
- // cases that make arguments automatically passed in by reference.
- return TREE_ADDRESSABLE(Type) || TYPE_SIZE(Type) == 0 ||
- TREE_CODE(TYPE_SIZE(Type)) != INTEGER_CST;
-}
-
-/// isSequentialCompatible - Return true if the specified gcc array or pointer
-/// type and the corresponding LLVM SequentialType lay out their components
-/// identically in memory, so doing a GEP accesses the right memory location.
-/// We assume that objects without a known size do not.
-bool isSequentialCompatible(tree type) {
- assert((TREE_CODE(type) == ARRAY_TYPE ||
- TREE_CODE(type) == POINTER_TYPE ||
- TREE_CODE(type) == REFERENCE_TYPE) && "not a sequential type!");
- // This relies on gcc types with constant size mapping to LLVM types with the
- // same size. It is possible for the component type not to have a size:
- // struct foo; extern foo bar[];
- return isInt64(TYPE_SIZE(TREE_TYPE(type)), true);
-}
-
-/// OffsetIsLLVMCompatible - Return true if the given field is offset from the
-/// start of the record by a constant amount which is not humongously big.
-bool OffsetIsLLVMCompatible(tree field_decl) {
- return isInt64(DECL_FIELD_OFFSET(field_decl), true);
-}
-
-/// isBitfield - Returns whether to treat the specified field as a bitfield.
-bool isBitfield(tree_node *field_decl) {
- if (!DECL_BIT_FIELD(field_decl))
- return false;
-
- // A bitfield. But do we need to treat it as one?
-
- assert(DECL_FIELD_BIT_OFFSET(field_decl) && "Bitfield with no bit offset!");
- if (TREE_INT_CST_LOW(DECL_FIELD_BIT_OFFSET(field_decl)) & 7)
- // Does not start on a byte boundary - must treat as a bitfield.
- return true;
-
- if (!isInt64(TYPE_SIZE (TREE_TYPE(field_decl)), true))
- // No size or variable sized - play safe, treat as a bitfield.
- return true;
-
- uint64_t TypeSizeInBits = getInt64(TYPE_SIZE (TREE_TYPE(field_decl)), true);
- assert(!(TypeSizeInBits & 7) && "A type with a non-byte size!");
-
- assert(DECL_SIZE(field_decl) && "Bitfield with no bit size!");
- uint64_t FieldSizeInBits = getInt64(DECL_SIZE(field_decl), true);
- if (FieldSizeInBits < TypeSizeInBits)
- // Not wide enough to hold the entire type - treat as a bitfield.
- return true;
-
- return false;
-}
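Some illustrative outcomes of the predicate above, assuming a 32-bit int and
8-bit units:

  //   int x : 3;   starting at bit 0 -> bitfield (3 bits < 32-bit type)
  //   int x : 32;  starting at bit 0 -> not a bitfield (fills its type exactly)
  //   int x : 32;  starting at bit 4 -> bitfield (does not start on a byte boundary)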
-
-/// refine_type_to - Cause all users of the opaque type old_type to switch
-/// to the more concrete type new_type.
-void refine_type_to(tree old_type, tree new_type)
-{
- const OpaqueType *OldTy = cast_or_null<OpaqueType>(GET_TYPE_LLVM(old_type));
- if (OldTy) {
- const Type *NewTy = ConvertType (new_type);
- const_cast<OpaqueType*>(OldTy)->refineAbstractTypeTo(NewTy);
- }
-}
-
-
-//===----------------------------------------------------------------------===//
-// Abstract Type Refinement Helpers
-//===----------------------------------------------------------------------===//
-//
-// This code is built to make sure that the TYPE_LLVM field on tree types is
-// updated when LLVM types are refined. This prevents dangling pointers from
-// occurring due to type coalescing.
-//
-namespace {
- class TypeRefinementDatabase : public AbstractTypeUser {
- virtual void refineAbstractType(const DerivedType *OldTy,
- const Type *NewTy);
- virtual void typeBecameConcrete(const DerivedType *AbsTy);
-
- // TypeUsers - For each abstract LLVM type, we keep track of all of the GCC
- // types that point to it.
- std::map<const Type*, std::vector<tree> > TypeUsers;
- public:
- /// setType - call SET_TYPE_LLVM(type, Ty), associating the type with the
- /// specified tree type. In addition, if the LLVM type is an abstract type,
- /// we add it to our data structure to track it.
- inline const Type *setType(tree type, const Type *Ty) {
- if (GET_TYPE_LLVM(type))
- RemoveTypeFromTable(type);
-
- if (Ty->isAbstract()) {
- std::vector<tree> &Users = TypeUsers[Ty];
- if (Users.empty()) Ty->addAbstractTypeUser(this);
- Users.push_back(type);
- }
- return SET_TYPE_LLVM(type, Ty);
- }
-
- void RemoveTypeFromTable(tree type);
- void dump() const;
- };
-
- /// TypeDB - The main global type database.
- TypeRefinementDatabase TypeDB;
-}
-
-/// RemoveTypeFromTable - We're about to change the LLVM type of 'type'.
-///
-void TypeRefinementDatabase::RemoveTypeFromTable(tree type) {
- const Type *Ty = GET_TYPE_LLVM(type);
- if (!Ty->isAbstract()) return;
- std::map<const Type*, std::vector<tree> >::iterator I = TypeUsers.find(Ty);
- assert(I != TypeUsers.end() && "Using an abstract type but not in table?");
-
- bool FoundIt = false;
- for (unsigned i = 0, e = I->second.size(); i != e; ++i)
- if (I->second[i] == type) {
- FoundIt = true;
- std::swap(I->second[i], I->second.back());
- I->second.pop_back();
- break;
- }
- assert(FoundIt && "Using an abstract type but not in table?");
-
- // If the type plane is now empty, nuke it.
- if (I->second.empty()) {
- TypeUsers.erase(I);
- Ty->removeAbstractTypeUser(this);
- }
-}
-
-/// refineAbstractType - The callback method invoked when an abstract type is
-/// resolved to another type. An object must override this method to update
-/// its internal state to reference NewType instead of OldType.
-///
-void TypeRefinementDatabase::refineAbstractType(const DerivedType *OldTy,
- const Type *NewTy) {
- if (OldTy == NewTy && OldTy->isAbstract()) return; // Nothing to do.
-
- std::map<const Type*, std::vector<tree> >::iterator I = TypeUsers.find(OldTy);
- assert(I != TypeUsers.end() && "Using an abstract type but not in table?");
-
- if (!NewTy->isAbstract()) {
- // If the type became concrete, update everything pointing to it, and remove
- // all of our entries from the map.
- if (OldTy != NewTy)
- for (unsigned i = 0, e = I->second.size(); i != e; ++i)
- SET_TYPE_LLVM(I->second[i], NewTy);
- } else {
- // Otherwise, it was refined to another instance of an abstract type. Move
- // everything over and stop monitoring OldTy.
- std::vector<tree> &NewSlot = TypeUsers[NewTy];
- if (NewSlot.empty()) NewTy->addAbstractTypeUser(this);
-
- for (unsigned i = 0, e = I->second.size(); i != e; ++i) {
- NewSlot.push_back(I->second[i]);
- SET_TYPE_LLVM(I->second[i], NewTy);
- }
- }
-
- TypeUsers.erase(I);
-
- // Next, remove OldTy's entry in the TargetData object if it has one.
- if (const StructType *STy = dyn_cast<StructType>(OldTy))
- getTargetData().InvalidateStructLayoutInfo(STy);
-
- OldTy->removeAbstractTypeUser(this);
-}
-
-/// The other case which AbstractTypeUsers must be aware of is when a type
-/// makes the transition from being abstract (where it has clients on its
-/// AbstractTypeUsers list) to concrete (where it does not). This method
-/// notifies ATUs when this occurs for a type.
-///
-void TypeRefinementDatabase::typeBecameConcrete(const DerivedType *AbsTy) {
- assert(TypeUsers.count(AbsTy) && "Not using this type!");
- // Remove the type from our collection of tracked types.
- TypeUsers.erase(AbsTy);
- AbsTy->removeAbstractTypeUser(this);
-}
-void TypeRefinementDatabase::dump() const {
- outs() << "TypeRefinementDatabase\n";
- outs().flush();
-}
-
-//===----------------------------------------------------------------------===//
-// Helper Routines
-//===----------------------------------------------------------------------===//
-
-/// GetFieldIndex - Return the index of the field in the given LLVM type that
-/// corresponds to the GCC field declaration 'decl'. This means that the LLVM
-/// and GCC fields start in the same byte (if 'decl' is a bitfield, this means
-/// that its first bit is within the byte the LLVM field starts at). Returns
-/// INT_MAX if there is no such LLVM field.
-int GetFieldIndex(tree decl, const Type *Ty) {
- assert(TREE_CODE(decl) == FIELD_DECL && "Expected a FIELD_DECL!");
- assert(Ty == ConvertType(DECL_CONTEXT(decl)) && "Field not for this type!");
-
- // If we previously cached the field index, return the cached value.
- unsigned Index = (unsigned)get_decl_index(decl);
- if (Index <= INT_MAX)
- return Index;
-
- // TODO: At this point we could process all fields of DECL_CONTEXT(decl), and
- // incrementally advance over the StructLayout. This would make indexing be
- // O(N) rather than O(N log N) if all N fields are used. It's not clear if it
- // would really be a win though.
-
- const StructType *STy = dyn_cast<StructType>(Ty);
- // If this is not a struct type, then for sure there is no corresponding LLVM
- // field (we do not require GCC record types to be converted to LLVM structs).
- if (!STy)
- return set_decl_index(decl, INT_MAX);
-
- // If the field declaration is at a variable or humongous offset then there
- // can be no corresponding LLVM field.
- if (!OffsetIsLLVMCompatible(decl))
- return set_decl_index(decl, INT_MAX);
-
- // Find the LLVM field that contains the first bit of the GCC field.
- uint64_t OffsetInBytes = getFieldOffsetInBits(decl) / 8; // Ignore bit in byte
- const StructLayout *SL = getTargetData().getStructLayout(STy);
- Index = SL->getElementContainingOffset(OffsetInBytes);
-
- // The GCC field must start in the first byte of the LLVM field.
- if (OffsetInBytes != SL->getElementOffset(Index))
- return set_decl_index(decl, INT_MAX);
-
- // We are not able to cache values bigger than INT_MAX, so bail out if the
- // LLVM field index is that huge.
- if (Index >= INT_MAX)
- return set_decl_index(decl, INT_MAX);
-
- // Found an appropriate LLVM field - return it.
- return set_decl_index(decl, Index);
-}
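A hypothetical caller of GetFieldIndex, showing how the INT_MAX convention
above is meant to be handled; the helper and its builder parameter are
assumptions for illustration only:

  Value *EmitFieldAddress(LLVMBuilder &Builder, Value *StructPtr, tree field) {
    int Index = GetFieldIndex(field, ConvertType(DECL_CONTEXT(field)));
    if (Index == INT_MAX)
      return 0; // No matching LLVM field: fall back to offset arithmetic.
    return Builder.CreateStructGEP(StructPtr, Index);
  }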
-
-/// FindLLVMTypePadding - If the specified struct has any inter-element padding,
-/// add it to the Padding array.
-static void FindLLVMTypePadding(const Type *Ty, tree type, uint64_t BitOffset,
- SmallVector<std::pair<uint64_t,uint64_t>, 16> &Padding) {
- if (const StructType *STy = dyn_cast<StructType>(Ty)) {
- const TargetData &TD = getTargetData();
- const StructLayout *SL = TD.getStructLayout(STy);
- uint64_t PrevFieldEnd = 0;
- for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
- // If this field is marked as being padding, then pretend it is not there.
- // This results in it (or something bigger) being added to Padding. This
- // matches the logic in CopyAggregate.
- if (type && isPaddingElement(type, i))
- continue;
-
- uint64_t FieldBitOffset = SL->getElementOffset(i)*8;
-
- // Get padding of sub-elements.
- FindLLVMTypePadding(STy->getElementType(i), 0,
- BitOffset+FieldBitOffset, Padding);
- // Check to see if there is any padding between this element and the
- // previous one.
- if (PrevFieldEnd < FieldBitOffset)
- Padding.push_back(std::make_pair(PrevFieldEnd+BitOffset,
- FieldBitOffset-PrevFieldEnd));
- PrevFieldEnd =
- FieldBitOffset + TD.getTypeSizeInBits(STy->getElementType(i));
- }
-
- // Check for tail padding.
- if (PrevFieldEnd < SL->getSizeInBits())
- Padding.push_back(std::make_pair(PrevFieldEnd,
- SL->getSizeInBits()-PrevFieldEnd));
- } else if (const ArrayType *ATy = dyn_cast<ArrayType>(Ty)) {
- uint64_t EltSize = getTargetData().getTypeSizeInBits(ATy->getElementType());
- for (unsigned i = 0, e = ATy->getNumElements(); i != e; ++i)
- FindLLVMTypePadding(ATy->getElementType(), 0, BitOffset+i*EltSize,
- Padding);
- }
-
- // Primitive and vector types have no padding.
-}
-
-/// GCCTypeOverlapsWithPadding - Return true if the specified gcc type overlaps
-/// with the specified region of padding. This only needs to handle types with
-/// a constant size.
-static bool GCCTypeOverlapsWithPadding(tree type, int PadStartBits,
- int PadSizeBits) {
- assert(type != error_mark_node);
- // LLVM doesn't care about variants such as const, volatile, or restrict.
- type = TYPE_MAIN_VARIANT(type);
-
- // If the type does not overlap, don't bother checking below.
-
- if (!isInt64(TYPE_SIZE(type), true))
- // No size, negative size (!) or huge - be conservative.
- return true;
-
- if (!getInt64(TYPE_SIZE(type), true) ||
- PadStartBits >= (int64_t)getInt64(TYPE_SIZE(type), false) ||
- PadStartBits+PadSizeBits <= 0)
- return false;
-
-
- switch (TREE_CODE(type)) {
- default:
- fprintf(stderr, "Unknown type to compare:\n");
- debug_tree(type);
- abort();
- case VOID_TYPE:
- case BOOLEAN_TYPE:
- case ENUMERAL_TYPE:
- case INTEGER_TYPE:
- case REAL_TYPE:
- case COMPLEX_TYPE:
- case VECTOR_TYPE:
- case POINTER_TYPE:
- case REFERENCE_TYPE:
- case OFFSET_TYPE:
- // These types have no holes.
- return true;
-
- case ARRAY_TYPE: {
- uint64_t NumElts = ArrayLengthOf(type);
- if (NumElts == NoLength)
- return true;
- unsigned EltSizeBits = getInt64(TYPE_SIZE(TREE_TYPE(type)), true);
-
- // Check each element for overlap. This is inelegant, but effective.
- for (unsigned i = 0; i != NumElts; ++i)
- if (GCCTypeOverlapsWithPadding(TREE_TYPE(type),
- PadStartBits- i*EltSizeBits, PadSizeBits))
- return true;
- return false;
- }
- case QUAL_UNION_TYPE:
- case UNION_TYPE: {
- // If this is a union with the transparent_union attribute set, it is
- // treated as if it were just the same as its first type.
- if (TYPE_TRANSPARENT_AGGR(type)) {
- tree Field = TYPE_FIELDS(type);
- assert(Field && "Transparent union must have some elements!");
- while (TREE_CODE(Field) != FIELD_DECL) {
- Field = TREE_CHAIN(Field);
- assert(Field && "Transparent union must have some elements!");
- }
- return GCCTypeOverlapsWithPadding(TREE_TYPE(Field),
- PadStartBits, PadSizeBits);
- }
-
- // See if any elements overlap.
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field)) {
- if (TREE_CODE(Field) != FIELD_DECL) continue;
- assert(getFieldOffsetInBits(Field) == 0 && "Union with non-zero offset?");
- // Skip fields that are known not to be present.
- if (TREE_CODE(type) == QUAL_UNION_TYPE &&
- integer_zerop(DECL_QUALIFIER(Field)))
- continue;
-
- if (GCCTypeOverlapsWithPadding(TREE_TYPE(Field),
- PadStartBits, PadSizeBits))
- return true;
-
- // Skip remaining fields if this one is known to be present.
- if (TREE_CODE(type) == QUAL_UNION_TYPE &&
- integer_onep(DECL_QUALIFIER(Field)))
- break;
- }
-
- return false;
- }
-
- case RECORD_TYPE:
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field)) {
- if (TREE_CODE(Field) != FIELD_DECL) continue;
-
- if (!OffsetIsLLVMCompatible(Field))
- // Variable or humongous offset.
- return true;
-
- uint64_t FieldBitOffset = getFieldOffsetInBits(Field);
- if (GCCTypeOverlapsWithPadding(TREE_TYPE(Field),
- PadStartBits-FieldBitOffset, PadSizeBits))
- return true;
- }
- return false;
- }
-}
-
-bool TypeConverter::GCCTypeOverlapsWithLLVMTypePadding(tree type,
- const Type *Ty) {
-
- // Start by finding all of the padding in the LLVM Type.
- SmallVector<std::pair<uint64_t,uint64_t>, 16> StructPadding;
- FindLLVMTypePadding(Ty, type, 0, StructPadding);
-
- for (unsigned i = 0, e = StructPadding.size(); i != e; ++i)
- if (GCCTypeOverlapsWithPadding(type, StructPadding[i].first,
- StructPadding[i].second))
- return true;
- return false;
-}
-
-
-//===----------------------------------------------------------------------===//
-// Main Type Conversion Routines
-//===----------------------------------------------------------------------===//
-
-const Type *TypeConverter::ConvertType(tree type) {
- if (type == error_mark_node) return Type::getInt32Ty(Context);
-
- // LLVM doesn't care about variants such as const, volatile, or restrict.
- type = TYPE_MAIN_VARIANT(type);
- const Type *Ty;
-
- switch (TREE_CODE(type)) {
- default:
- debug_tree(type);
- llvm_unreachable("Unknown type to convert!");
-
- case VOID_TYPE:
- Ty = SET_TYPE_LLVM(type, Type::getVoidTy(Context));
- break;
-
- case RECORD_TYPE:
- case QUAL_UNION_TYPE:
- case UNION_TYPE:
- Ty = ConvertRECORD(type);
- break;
-
- case ENUMERAL_TYPE:
- // Use of an enum that is implicitly declared?
- if (TYPE_SIZE(type) == 0) {
- // If we already compiled this type, use the old type.
- if ((Ty = GET_TYPE_LLVM(type)))
- return Ty;
-
- Ty = OpaqueType::get(Context);
- Ty = TypeDB.setType(type, Ty);
- break;
- }
- // FALL THROUGH.
- case BOOLEAN_TYPE:
- case INTEGER_TYPE: {
- if ((Ty = GET_TYPE_LLVM(type))) return Ty;
- uint64_t Size = getInt64(TYPE_SIZE(type), true);
- Ty = SET_TYPE_LLVM(type, IntegerType::get(Context, Size));
- break;
- }
-
- case REAL_TYPE:
- if ((Ty = GET_TYPE_LLVM(type))) return Ty;
- switch (TYPE_PRECISION(type)) {
- default:
- debug_tree(type);
- llvm_unreachable("Unknown FP type!");
- case 32: Ty = SET_TYPE_LLVM(type, Type::getFloatTy(Context)); break;
- case 64: Ty = SET_TYPE_LLVM(type, Type::getDoubleTy(Context)); break;
- case 80: Ty = SET_TYPE_LLVM(type, Type::getX86_FP80Ty(Context)); break;
- case 128:
-#ifdef TARGET_POWERPC
- Ty = SET_TYPE_LLVM(type, Type::getPPC_FP128Ty(Context));
-#else
- // IEEE quad precision.
- Ty = SET_TYPE_LLVM(type, Type::getFP128Ty(Context));
-#endif
- break;
- }
- break;
-
- case COMPLEX_TYPE: {
- if ((Ty = GET_TYPE_LLVM(type))) return Ty;
- Ty = ConvertType(TREE_TYPE(type));
- assert(!Ty->isAbstract() && "should use TypeDB.setType()");
- Ty = StructType::get(Context, Ty, Ty, NULL);
- Ty = SET_TYPE_LLVM(type, Ty);
- break;
- }
-
- case VECTOR_TYPE: {
- if ((Ty = GET_TYPE_LLVM(type))) return Ty;
- Ty = ConvertType(TREE_TYPE(type));
- assert(!Ty->isAbstract() && "should use TypeDB.setType()");
- Ty = VectorType::get(Ty, TYPE_VECTOR_SUBPARTS(type));
- Ty = SET_TYPE_LLVM(type, Ty);
- break;
- }
-
- case POINTER_TYPE:
- case REFERENCE_TYPE:
- if (const PointerType *PTy = cast_or_null<PointerType>(GET_TYPE_LLVM(type))){
- // We already converted this type. If this isn't a case where we have to
- // reparse it, just return it.
- if (PointersToReresolve.empty() || PointersToReresolve.back() != type ||
- ConvertingStruct)
- return PTy;
-
- // Okay, we know that we're !ConvertingStruct and that type is on the end
- // of the vector. Remove this entry from the PointersToReresolve list and
- // get the pointee type. Note that this order is important in case the
- // pointee type uses this pointer.
- assert(PTy->getElementType()->isOpaqueTy() && "Not a deferred ref!");
-
- // We are actively resolving this pointer. We want to pop this value from
- // the stack, as we are no longer resolving it. However, we don't want to
- // make it look like we are now resolving the previous pointer on the
- // stack, so pop this value and push a null.
- PointersToReresolve.back() = 0;
-
-
- // Do not do any nested resolution. We know that there is a higher-level
- // loop processing deferred pointers, let it handle anything new.
- ConvertingStruct = true;
-
- // Note that we know that PTy cannot be resolved or invalidated here.
- const Type *Actual = ConvertType(TREE_TYPE(type));
- assert(GET_TYPE_LLVM(type) == PTy && "Pointer invalidated!");
-
- // Restore ConvertingStruct for the caller.
- ConvertingStruct = false;
-
- if (Actual->isVoidTy())
- Actual = Type::getInt8Ty(Context); // void* -> i8*
-
- // Update the type, potentially updating TYPE_LLVM(type).
- const OpaqueType *OT = cast<OpaqueType>(PTy->getElementType());
- const_cast<OpaqueType*>(OT)->refineAbstractTypeTo(Actual);
- Ty = GET_TYPE_LLVM(type);
- break;
- } else {
- // If we are converting a struct, and if we haven't converted the pointee
- // type, add this pointer to PointersToReresolve and return an opaque*.
- if (ConvertingStruct) {
- // If the pointee type has not already been converted to LLVM, create
- // a new opaque type and remember it in the database.
- Ty = GET_TYPE_LLVM(TYPE_MAIN_VARIANT(TREE_TYPE(type)));
- if (Ty == 0) {
- PointersToReresolve.push_back(type);
- Ty = TypeDB.setType(type,
- PointerType::getUnqual(OpaqueType::get(Context)));
- break;
- }
-
- // A type has already been computed. However, this may be some sort of
- // recursive struct. We don't want to call ConvertType on it, because
- // this will try to resolve it, and not adding the type to the
- // PointersToReresolve collection is just an optimization. Instead,
- // we'll use the type returned by GET_TYPE_LLVM directly, even if this
- // may be resolved further in the future.
- } else {
- // If we're not in a struct, just call ConvertType. If it has already
- // been converted, this will return the precomputed value, otherwise
- // this will compute and return the new type.
- Ty = ConvertType(TREE_TYPE(type));
- }
-
- if (Ty->isVoidTy())
- Ty = Type::getInt8Ty(Context); // void* -> i8*
- Ty = TypeDB.setType(type, Ty->getPointerTo());
- break;
- }
-
- case METHOD_TYPE:
- case FUNCTION_TYPE: {
- if ((Ty = GET_TYPE_LLVM(type)))
- return Ty;
-
- // No declaration to pass through, passing NULL.
- CallingConv::ID CallingConv;
- AttrListPtr PAL;
- Ty = TypeDB.setType(type, ConvertFunctionType(type, NULL, NULL,
- CallingConv, PAL));
- break;
- }
-
- case ARRAY_TYPE: {
- if ((Ty = GET_TYPE_LLVM(type)))
- return Ty;
-
- const Type *ElementTy = ConvertType(TREE_TYPE(type));
- uint64_t NumElements = ArrayLengthOf(type);
-
- if (NumElements == NoLength) // Variable length array?
- NumElements = 0;
-
- // Create the array type.
- Ty = ArrayType::get(ElementTy, NumElements);
-
- // If the user increased the alignment of the array element type, then the
- // size of the array is rounded up by that alignment even though the size
- // of the array element type is not (!). Correct for this if necessary by
- // adding padding. Padding may also be needed if the element type has variable
- // size and the array type has variable length, but by a miracle the product
- // gives a constant size.
- if (isInt64(TYPE_SIZE(type), true)) {
- uint64_t PadBits = getInt64(TYPE_SIZE(type), true) -
- getTargetData().getTypeAllocSizeInBits(Ty);
- if (PadBits) {
- const Type *Padding = ArrayType::get(Type::getInt8Ty(Context), PadBits / 8);
- Ty = StructType::get(Context, Ty, Padding, NULL);
- }
- }
-
- Ty = TypeDB.setType(type, Ty);
- break;
- }
-
- case OFFSET_TYPE:
- // Handle OFFSET_TYPE specially. This is used for pointers to members,
- // which are really just integer offsets. As such, return the appropriate
- // integer directly.
- switch (getTargetData().getPointerSize()) {
- default: assert(0 && "Unknown pointer size!");
- case 4: Ty = Type::getInt32Ty(Context); break;
- case 8: Ty = Type::getInt64Ty(Context); break;
- }
- }
-
- // Try to give the type a helpful name. There is no point in doing this for
- // array and pointer types since LLVM automatically gives them a useful name
- // based on the element type.
- if (!Ty->isVoidTy() && !isa<SequentialType>(Ty)) {
- const std::string &TypeName = getDescriptiveName(type);
- if (!TypeName.empty())
- TheModule->addTypeName(TypeName, Ty);
- }
-
- return Ty;
-}
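
For concreteness, the OFFSET_TYPE case above corresponds to C++ pointers to data
members, which carry nothing but an offset. A minimal, hypothetical example:

  // Illustration only: 'int S::*' is an OFFSET_TYPE in GCC and is lowered to a
  // plain i32 or i64, matching the target pointer size.
  struct S { int a; int b; };
  int S::*Member = &S::b;   // holds the byte offset of 'b' within 'S'
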
-
-//===----------------------------------------------------------------------===//
-// FUNCTION/METHOD_TYPE Conversion Routines
-//===----------------------------------------------------------------------===//
-
-namespace {
- class FunctionTypeConversion : public DefaultABIClient {
- PATypeHolder &RetTy;
- std::vector<PATypeHolder> &ArgTypes;
- CallingConv::ID &CallingConv;
- bool isShadowRet;
- bool KNRPromotion;
- unsigned Offset;
- public:
- FunctionTypeConversion(PATypeHolder &retty, std::vector<PATypeHolder> &AT,
- CallingConv::ID &CC, bool KNR)
- : RetTy(retty), ArgTypes(AT), CallingConv(CC), KNRPromotion(KNR), Offset(0) {
- CallingConv = CallingConv::C;
- isShadowRet = false;
- }
-
- /// getCallingConv - This provides the desired CallingConv for the function.
- CallingConv::ID& getCallingConv(void) { return CallingConv; }
-
- bool isShadowReturn() const { return isShadowRet; }
-
- /// HandleScalarResult - This callback is invoked if the function returns a
- /// simple scalar result value.
- void HandleScalarResult(const Type *RetTy) {
- this->RetTy = RetTy;
- }
-
- /// HandleAggregateResultAsScalar - This callback is invoked if the function
- /// returns an aggregate value by bit converting it to the specified scalar
- /// type and returning that.
- void HandleAggregateResultAsScalar(const Type *ScalarTy, unsigned Offset=0) {
- RetTy = ScalarTy;
- this->Offset = Offset;
- }
-
- /// HandleAggregateResultAsAggregate - This callback is invoked if the function
- /// returns an aggregate value using multiple return values.
- void HandleAggregateResultAsAggregate(const Type *AggrTy) {
- RetTy = AggrTy;
- }
-
- /// HandleShadowResult - Handle an aggregate or scalar shadow argument.
- void HandleShadowResult(const PointerType *PtrArgTy, bool RetPtr) {
- // This function either returns void or the shadow argument,
- // depending on the target.
- RetTy = RetPtr ? PtrArgTy : Type::getVoidTy(Context);
-
- // In any case, a shadow argument is added to the argument list.
- ArgTypes.push_back(PtrArgTy);
-
- // Also, note the use of a shadow argument.
- isShadowRet = true;
- }
-
- /// HandleAggregateShadowResult - This callback is invoked if the function
- /// returns an aggregate value by using a "shadow" first parameter, which is
- /// a pointer to the aggregate, of type PtrArgTy. If RetPtr is set to true,
- /// the pointer argument itself is returned from the function.
- void HandleAggregateShadowResult(const PointerType *PtrArgTy,
- bool RetPtr) {
- HandleShadowResult(PtrArgTy, RetPtr);
- }
-
- /// HandleScalarShadowResult - This callback is invoked if the function
- /// returns a scalar value by using a "shadow" first parameter, which is a
- /// pointer to the scalar, of type PtrArgTy. If RetPtr is set to true,
- /// the pointer argument itself is returned from the function.
- void HandleScalarShadowResult(const PointerType *PtrArgTy, bool RetPtr) {
- HandleShadowResult(PtrArgTy, RetPtr);
- }
-
- void HandlePad(const llvm::Type *LLVMTy) {
- HandleScalarArgument(LLVMTy, 0, 0);
- }
-
- void HandleScalarArgument(const llvm::Type *LLVMTy, tree type,
- unsigned /*RealSize*/ = 0) {
- if (KNRPromotion) {
- if (type == float_type_node)
- LLVMTy = ConvertType(double_type_node);
- else if (LLVMTy->isIntegerTy(16) || LLVMTy->isIntegerTy(8) ||
- LLVMTy->isIntegerTy(1))
- LLVMTy = Type::getInt32Ty(Context);
- }
- ArgTypes.push_back(LLVMTy);
- }
-
- /// HandleByInvisibleReferenceArgument - This callback is invoked if a pointer
- /// (of type PtrTy) to the argument is passed rather than the argument itself.
- void HandleByInvisibleReferenceArgument(const llvm::Type *PtrTy,
- tree /*type*/) {
- ArgTypes.push_back(PtrTy);
- }
-
- /// HandleByValArgument - This callback is invoked if the aggregate function
- /// argument is passed by value. It is lowered to a parameter passed by
- /// reference with an additional parameter attribute "ByVal".
- void HandleByValArgument(const llvm::Type *LLVMTy, tree type) {
- HandleScalarArgument(LLVMTy->getPointerTo(), type);
- }
-
- /// HandleFCAArgument - This callback is invoked if the aggregate function
- /// argument is a first class aggregate passed by value.
- void HandleFCAArgument(const llvm::Type *LLVMTy, tree /*type*/) {
- ArgTypes.push_back(LLVMTy);
- }
- };
-}
-
-
-static Attributes HandleArgumentExtension(tree ArgTy) {
- if (TREE_CODE(ArgTy) == BOOLEAN_TYPE) {
- if (TREE_INT_CST_LOW(TYPE_SIZE(ArgTy)) < INT_TYPE_SIZE)
- return Attribute::ZExt;
- } else if (TREE_CODE(ArgTy) == INTEGER_TYPE &&
- TREE_INT_CST_LOW(TYPE_SIZE(ArgTy)) < INT_TYPE_SIZE) {
- if (TYPE_UNSIGNED(ArgTy))
- return Attribute::ZExt;
- else
- return Attribute::SExt;
- }
-
- return Attribute::None;
-}
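
The extension rule implemented by HandleArgumentExtension above can be restated
without GCC trees. A minimal sketch, with a 32-bit 'int' width as an assumption:

  enum class Ext { None, ZExt, SExt };

  // Sub-'int' booleans and unsigned integers get ZExt, sub-'int' signed
  // integers get SExt, everything else is passed unextended.
  static Ext extensionFor(unsigned BitWidth, bool IsSigned, bool IsBool,
                          unsigned IntWidth = 32) {
    if (BitWidth >= IntWidth)
      return Ext::None;
    if (IsBool || !IsSigned)
      return Ext::ZExt;
    return Ext::SExt;
  }
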
-
-/// ConvertArgListToFnType - This method is used to build the argument type
-/// list for unprototyped (K&R style) functions. In this case, we have to
-/// figure out the type list (to build a FunctionType) from the actual
-/// DECL_ARGUMENTS list for the function. It takes the DECL_ARGUMENTS list
-/// (Args), computes the argument types, and returns the resulting LLVM
-/// function type.
-const FunctionType *TypeConverter::
-ConvertArgListToFnType(tree type, tree Args, tree static_chain,
- CallingConv::ID &CallingConv, AttrListPtr &PAL) {
- tree ReturnType = TREE_TYPE(type);
- std::vector<PATypeHolder> ArgTys;
- PATypeHolder RetTy(Type::getVoidTy(Context));
-
- FunctionTypeConversion Client(RetTy, ArgTys, CallingConv, true /*K&R*/);
- DefaultABI ABIConverter(Client);
-
-#ifdef TARGET_ADJUST_LLVM_CC
- TARGET_ADJUST_LLVM_CC(CallingConv, type);
-#endif
-
- // Builtins are always prototyped, so this isn't one.
- ABIConverter.HandleReturnType(ReturnType, current_function_decl, false);
-
- SmallVector<AttributeWithIndex, 8> Attrs;
-
- // Compute whether the result needs to be zext or sext'd.
- Attributes RAttributes = HandleArgumentExtension(ReturnType);
-
- // Allow the target to change the attributes.
-#ifdef TARGET_ADJUST_LLVM_RETATTR
- TARGET_ADJUST_LLVM_RETATTR(RAttributes, type);
-#endif
-
- if (RAttributes != Attribute::None)
- Attrs.push_back(AttributeWithIndex::get(0, RAttributes));
-
- // If this function returns via a shadow argument, the dest loc is passed
- // in as a pointer. Mark that pointer as struct-ret and noalias.
- if (ABIConverter.isShadowReturn())
- Attrs.push_back(AttributeWithIndex::get(ArgTys.size(),
- Attribute::StructRet | Attribute::NoAlias));
-
- std::vector<const Type*> ScalarArgs;
- if (static_chain) {
- // Pass the static chain as the first parameter.
- ABIConverter.HandleArgument(TREE_TYPE(static_chain), ScalarArgs);
- // Mark it as the chain argument.
- Attrs.push_back(AttributeWithIndex::get(ArgTys.size(),
- Attribute::Nest));
- }
-
- for (; Args && TREE_TYPE(Args) != void_type_node; Args = TREE_CHAIN(Args)) {
- tree ArgTy = TREE_TYPE(Args);
-
- // Determine if there are any attributes for this param.
- Attributes PAttributes = Attribute::None;
-
- ABIConverter.HandleArgument(ArgTy, ScalarArgs, &PAttributes);
-
- // Compute zext/sext attributes.
- PAttributes |= HandleArgumentExtension(ArgTy);
-
- if (PAttributes != Attribute::None)
- Attrs.push_back(AttributeWithIndex::get(ArgTys.size(), PAttributes));
- }
-
- PAL = AttrListPtr::get(Attrs.begin(), Attrs.end());
- return GetFunctionType(RetTy, ArgTys, false);
-}
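
To make the K&R case concrete, here is a hypothetical old-style definition
(valid only in C, which is where unprototyped functions arise) and the
promotion the converter above applies:

  /* Hypothetical example, not from the patch. */
  int scale(c, f)
      char  c;
      float f;
  { return c * (int)f; }
  /* ConvertArgListToFnType promotes the parameters, so the resulting LLVM
     function type is i32 (i32, double) rather than i32 (i8, float). */
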
-
-const FunctionType *TypeConverter::
-ConvertFunctionType(tree type, tree decl, tree static_chain,
- CallingConv::ID &CallingConv, AttrListPtr &PAL) {
- PATypeHolder RetTy = Type::getVoidTy(Context);
- std::vector<PATypeHolder> ArgTypes;
- bool isVarArg = false;
- FunctionTypeConversion Client(RetTy, ArgTypes, CallingConv, false/*not K&R*/);
- DefaultABI ABIConverter(Client);
-
- // Allow the target to set the CC for things like fastcall etc.
-#ifdef TARGET_ADJUST_LLVM_CC
- TARGET_ADJUST_LLVM_CC(CallingConv, type);
-#endif
-
- ABIConverter.HandleReturnType(TREE_TYPE(type), current_function_decl,
- decl ? DECL_BUILT_IN(decl) : false);
-
- // Compute attributes for return type (and function attributes).
- SmallVector<AttributeWithIndex, 8> Attrs;
- Attributes FnAttributes = Attribute::None;
-
- int flags = flags_from_decl_or_type(decl ? decl : type);
-
- // Check for 'noreturn' function attribute.
- if (flags & ECF_NORETURN)
- FnAttributes |= Attribute::NoReturn;
-
- // Check for 'nounwind' function attribute.
- if (flags & ECF_NOTHROW)
- FnAttributes |= Attribute::NoUnwind;
-
- // Check for 'readnone' function attribute.
- // Both PURE and CONST will be set if the user applied
- // __attribute__((const)) to a function the compiler
- // knows to be pure, such as log. A user or (more
- // likely) libm implementor might know their local log
- // is in fact const, so this should be valid (and gcc
- // accepts it). But llvm IR does not allow both, so
- // set only ReadNone.
- if (flags & ECF_CONST)
- FnAttributes |= Attribute::ReadNone;
-
- // Check for 'readonly' function attribute.
- if (flags & ECF_PURE && !(flags & ECF_CONST))
- FnAttributes |= Attribute::ReadOnly;
-
- // Since they write the return value through a pointer,
- // 'sret' functions cannot be 'readnone' or 'readonly'.
- if (ABIConverter.isShadowReturn())
- FnAttributes &= ~(Attribute::ReadNone|Attribute::ReadOnly);
-
- // Demote 'readnone' nested functions to 'readonly' since
- // they may need to read through the static chain.
- if (static_chain && (FnAttributes & Attribute::ReadNone)) {
- FnAttributes &= ~Attribute::ReadNone;
- FnAttributes |= Attribute::ReadOnly;
- }
-
- // Compute whether the result needs to be zext or sext'd.
- Attributes RAttributes = Attribute::None;
- RAttributes |= HandleArgumentExtension(TREE_TYPE(type));
-
- // Allow the target to change the attributes.
-#ifdef TARGET_ADJUST_LLVM_RETATTR
- TARGET_ADJUST_LLVM_RETATTR(RAttributes, type);
-#endif
-
- // The value returned by a 'malloc' function does not alias anything.
- if (flags & ECF_MALLOC)
- RAttributes |= Attribute::NoAlias;
-
- if (RAttributes != Attribute::None)
- Attrs.push_back(AttributeWithIndex::get(0, RAttributes));
-
- // If this function returns via a shadow argument, the dest loc is passed
- // in as a pointer. Mark that pointer as struct-ret and noalias.
- if (ABIConverter.isShadowReturn())
- Attrs.push_back(AttributeWithIndex::get(ArgTypes.size(),
- Attribute::StructRet | Attribute::NoAlias));
-
- std::vector<const Type*> ScalarArgs;
- if (static_chain) {
- // Pass the static chain as the first parameter.
- ABIConverter.HandleArgument(TREE_TYPE(static_chain), ScalarArgs);
- // Mark it as the chain argument.
- Attrs.push_back(AttributeWithIndex::get(ArgTypes.size(),
- Attribute::Nest));
- }
-
- // If the target has regparam parameters, allow it to inspect the function
- // type.
- int local_regparam = 0;
- int local_fp_regparam = 0;
-#ifdef LLVM_TARGET_ENABLE_REGPARM
- LLVM_TARGET_INIT_REGPARM(local_regparam, local_fp_regparam, type);
-#endif // LLVM_TARGET_ENABLE_REGPARM
-
- // Keep track of whether we see a byval argument.
- bool HasByVal = false;
-
- // Check if we have a corresponding decl to inspect.
- tree DeclArgs = (decl) ? DECL_ARGUMENTS(decl) : NULL;
- // Loop over all of the arguments, adding them as we go.
- tree Args = TYPE_ARG_TYPES(type);
- for (; Args && TREE_VALUE(Args) != void_type_node; Args = TREE_CHAIN(Args)){
- tree ArgTy = TREE_VALUE(Args);
- if (!isPassedByInvisibleReference(ArgTy) &&
- ConvertType(ArgTy)->isOpaqueTy()) {
- // If we are passing an opaque struct by value, we don't know how many
- // arguments it will turn into. Because we can't handle this yet,
- // codegen the prototype as (...).
- if (CallingConv == CallingConv::C)
- ArgTypes.clear();
- else
- // Don't nuke the first argument.
- ArgTypes.erase(ArgTypes.begin()+1, ArgTypes.end());
- Args = 0;
- break;
- }
-
- // Determine if there are any attributes for this param.
- Attributes PAttributes = Attribute::None;
-
- unsigned OldSize = ArgTypes.size();
-
- ABIConverter.HandleArgument(ArgTy, ScalarArgs, &PAttributes);
-
- // Compute zext/sext attributes.
- PAttributes |= HandleArgumentExtension(ArgTy);
-
- // Compute noalias attributes. If we have a decl for the function,
- // inspect it for restrict qualifiers; otherwise try the argument
- // types.
- tree RestrictArgTy = (DeclArgs) ? TREE_TYPE(DeclArgs) : ArgTy;
- if (TREE_CODE(RestrictArgTy) == POINTER_TYPE ||
- TREE_CODE(RestrictArgTy) == REFERENCE_TYPE) {
- if (TYPE_RESTRICT(RestrictArgTy))
- PAttributes |= Attribute::NoAlias;
- }
-
-#ifdef LLVM_TARGET_ENABLE_REGPARM
- // Allow the target to mark this as inreg.
- if (INTEGRAL_TYPE_P(ArgTy) || POINTER_TYPE_P(ArgTy) ||
- SCALAR_FLOAT_TYPE_P(ArgTy))
- LLVM_ADJUST_REGPARM_ATTRIBUTE(PAttributes, ArgTy,
- TREE_INT_CST_LOW(TYPE_SIZE(ArgTy)),
- local_regparam, local_fp_regparam);
-#endif // LLVM_TARGET_ENABLE_REGPARM
-
- if (PAttributes != Attribute::None) {
- HasByVal |= PAttributes & Attribute::ByVal;
-
- // If the argument is split into multiple scalars, assign the
- // attributes to all scalars of the aggregate.
- for (unsigned i = OldSize + 1; i <= ArgTypes.size(); ++i) {
- Attrs.push_back(AttributeWithIndex::get(i, PAttributes));
- }
- }
-
- if (DeclArgs)
- DeclArgs = TREE_CHAIN(DeclArgs);
- }
-
- // If there is a byval argument then it is not safe to mark the function
- // 'readnone' or 'readonly': gcc permits a 'const' or 'pure' function to
- // write to struct arguments passed by value, but in LLVM this becomes a
- // write through the byval pointer argument, which LLVM does not allow for
- // readonly/readnone functions.
- if (HasByVal)
- FnAttributes &= ~(Attribute::ReadNone | Attribute::ReadOnly);
-
- if (flag_force_vararg_prototypes)
- // If forcing prototypes to be varargs, make all function types varargs
- // except those for builtin functions.
- isVarArg = decl ? !DECL_BUILT_IN(decl) : true;
- else
- // If the argument list ends with a void type node, it isn't vararg.
- isVarArg = (Args == 0);
- assert(RetTy && "Return type not specified!");
-
- if (FnAttributes != Attribute::None)
- Attrs.push_back(AttributeWithIndex::get(~0, FnAttributes));
-
- // Finally, make the function type and result attributes.
- PAL = AttrListPtr::get(Attrs.begin(), Attrs.end());
- return GetFunctionType(RetTy, ArgTypes, isVarArg);
-}
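
As a rough, illustrative summary (the declarations below are hypothetical), the
ECF_* flags handled above usually come from source attributes like these:

  extern "C" int  isqrt(int) __attribute__((const));         // ECF_CONST    -> readnone
  extern "C" int  count(const char *) __attribute__((pure)); // ECF_PURE     -> readonly
  extern "C" void fail() __attribute__((noreturn));          // ECF_NORETURN -> noreturn
  extern "C" void *grab(unsigned) __attribute__((malloc));   // ECF_MALLOC   -> noalias result
  // Note: readnone/readonly are dropped again for sret returns, byval
  // arguments and (partially) for nested functions, as the code above shows.
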
-
-//===----------------------------------------------------------------------===//
-// RECORD/Struct Conversion Routines
-//===----------------------------------------------------------------------===//
-
-/// StructTypeConversionInfo - A temporary structure that is used when
-/// translating a RECORD_TYPE to an LLVM type.
-struct StructTypeConversionInfo {
- std::vector<const Type*> Elements;
- std::vector<uint64_t> ElementOffsetInBytes;
- std::vector<uint64_t> ElementSizeInBytes;
- std::vector<bool> PaddingElement; // True if field is used for padding
- const TargetData &TD;
- unsigned GCCStructAlignmentInBytes;
- bool Packed; // True if struct is packed
- bool AllBitFields; // True if all struct fields are bit fields
- bool LastFieldStartsAtNonByteBoundry;
- unsigned ExtraBitsAvailable; // Non-zero if last field is bit field and it
- // does not use all allocated bits
-
- StructTypeConversionInfo(TargetMachine &TM, unsigned GCCAlign, bool P)
- : TD(*TM.getTargetData()), GCCStructAlignmentInBytes(GCCAlign),
- Packed(P), AllBitFields(true), LastFieldStartsAtNonByteBoundry(false),
- ExtraBitsAvailable(0) {}
-
- void lastFieldStartsAtNonByteBoundry(bool value) {
- LastFieldStartsAtNonByteBoundry = value;
- }
-
- void extraBitsAvailable (unsigned E) {
- ExtraBitsAvailable = E;
- }
-
- bool isPacked() { return Packed; }
-
- void markAsPacked() {
- Packed = true;
- }
-
- void allFieldsAreNotBitFields() {
- AllBitFields = false;
- // Next field is not a bitfield.
- LastFieldStartsAtNonByteBoundry = false;
- }
-
- unsigned getGCCStructAlignmentInBytes() const {
- return GCCStructAlignmentInBytes;
- }
-
- /// getTypeAlignment - Return the alignment of the specified type in bytes.
- ///
- unsigned getTypeAlignment(const Type *Ty) const {
- return Packed ? 1 : TD.getABITypeAlignment(Ty);
- }
-
- /// getTypeSize - Return the size of the specified type in bytes.
- ///
- uint64_t getTypeSize(const Type *Ty) const {
- return TD.getTypeAllocSize(Ty);
- }
-
- /// getLLVMType - Return the LLVM type for the specified object.
- ///
- const Type *getLLVMType() const {
- // Use a packed type if Packed is set or all struct fields are bitfields.
- // An empty struct is not packed unless Packed is set.
- return StructType::get(Context, Elements,
- Packed || (!Elements.empty() && AllBitFields));
- }
-
- /// getAlignmentAsLLVMStruct - Return the alignment of this struct if it were
- /// converted to an LLVM type.
- uint64_t getAlignmentAsLLVMStruct() const {
- if (Packed || AllBitFields) return 1;
- unsigned MaxAlign = 1;
- for (unsigned i = 0, e = Elements.size(); i != e; ++i)
- MaxAlign = std::max(MaxAlign, getTypeAlignment(Elements[i]));
- return MaxAlign;
- }
-
- /// getSizeAsLLVMStruct - Return the size of this struct if it were converted
- /// to an LLVM type. This is the end of the last element plus an alignment pad
- /// at the end.
- uint64_t getSizeAsLLVMStruct() const {
- if (Elements.empty()) return 0;
- unsigned MaxAlign = getAlignmentAsLLVMStruct();
- uint64_t Size = ElementOffsetInBytes.back()+ElementSizeInBytes.back();
- return (Size+MaxAlign-1) & ~(MaxAlign-1);
- }
-
- // If this struct is packed, or consists only of bitfields, and
- // ExtraBitsAvailable covers one or more whole bytes, then shrink the last
- // element by that many bytes.
- void RemoveExtraBytes () {
-
- unsigned NoOfBytesToRemove = ExtraBitsAvailable/8;
-
- if (!Packed && !AllBitFields)
- return;
-
- if (NoOfBytesToRemove == 0)
- return;
-
- const Type *LastType = Elements.back();
- unsigned PadBytes = 0;
-
- if (LastType->isIntegerTy(8))
- PadBytes = 1 - NoOfBytesToRemove;
- else if (LastType->isIntegerTy(16))
- PadBytes = 2 - NoOfBytesToRemove;
- else if (LastType->isIntegerTy(32))
- PadBytes = 4 - NoOfBytesToRemove;
- else if (LastType->isIntegerTy(64))
- PadBytes = 8 - NoOfBytesToRemove;
- else
- return;
-
- assert (PadBytes > 0 && "Unable to remove extra bytes");
-
- // Update last element type and size, element offset is unchanged.
- const Type *Pad = ArrayType::get(Type::getInt8Ty(Context), PadBytes);
- unsigned OriginalSize = ElementSizeInBytes.back();
- Elements.pop_back();
- Elements.push_back(Pad);
-
- ElementSizeInBytes.pop_back();
- ElementSizeInBytes.push_back(OriginalSize - NoOfBytesToRemove);
- }
-
- /// ResizeLastElementIfOverlapsWith - If the last element in the struct
- /// includes the specified byte, remove it. Return true if the struct
- /// layout is sized properly. Return false if unable to handle ByteOffset;
- /// in this case the caller should redo this struct as a packed structure.
- bool ResizeLastElementIfOverlapsWith(uint64_t ByteOffset, tree /*Field*/,
- const Type *Ty) {
- const Type *SavedTy = NULL;
-
- if (!Elements.empty()) {
- assert(ElementOffsetInBytes.back() <= ByteOffset &&
- "Cannot go backwards in struct");
-
- SavedTy = Elements.back();
- if (ElementOffsetInBytes.back()+ElementSizeInBytes.back() > ByteOffset) {
- // The last element overlapped with this one, remove it.
- uint64_t PoppedOffset = ElementOffsetInBytes.back();
- Elements.pop_back();
- ElementOffsetInBytes.pop_back();
- ElementSizeInBytes.pop_back();
- PaddingElement.pop_back();
- uint64_t EndOffset = getNewElementByteOffset(1);
- if (EndOffset < PoppedOffset) {
- // Make sure that some field starts at the position of the
- // field we just popped. Otherwise we might end up with a
- // gcc non-bitfield being mapped to an LLVM field with a
- // different offset.
- const Type *Pad = Type::getInt8Ty(Context);
- if (PoppedOffset != EndOffset + 1)
- Pad = ArrayType::get(Pad, PoppedOffset - EndOffset);
- addElement(Pad, EndOffset, PoppedOffset - EndOffset);
- }
- }
- }
-
- // Get the LLVM type for the field. If this field is a bitfield, use the
- // declared type, not the shrunk-to-fit type that GCC gives us in TREE_TYPE.
- unsigned ByteAlignment = getTypeAlignment(Ty);
- uint64_t NextByteOffset = getNewElementByteOffset(ByteAlignment);
- if (NextByteOffset > ByteOffset ||
- ByteAlignment > getGCCStructAlignmentInBytes()) {
- // LLVM disagrees as to where this field should go in the natural field
- // ordering. Therefore convert to a packed struct and try again.
- return false;
- }
-
- // If alignment won't round us up to the right boundary, insert explicit
- // padding.
- if (NextByteOffset < ByteOffset) {
- uint64_t CurOffset = getNewElementByteOffset(1);
- const Type *Pad = Type::getInt8Ty(Context);
- if (SavedTy && LastFieldStartsAtNonByteBoundry)
- // We want to reuse SavedTy to access this bit field.
- // e.g. struct __attribute__((packed)) {
- //      unsigned int A;
- // unsigned short B : 6,
- // C : 15;
- // char D; };
- // In this example, previous field is C and D is current field.
- addElement(SavedTy, CurOffset, ByteOffset - CurOffset);
- else if (ByteOffset - CurOffset != 1)
- Pad = ArrayType::get(Pad, ByteOffset - CurOffset);
- addElement(Pad, CurOffset, ByteOffset - CurOffset);
- }
- return true;
- }
-
- /// RemoveFieldsAfter - Remove the specified field and all of the fields that
- /// come after it.
- void RemoveFieldsAfter(unsigned FieldNo) {
- Elements.erase(Elements.begin()+FieldNo, Elements.end());
- ElementOffsetInBytes.erase(ElementOffsetInBytes.begin()+FieldNo,
- ElementOffsetInBytes.end());
- ElementSizeInBytes.erase(ElementSizeInBytes.begin()+FieldNo,
- ElementSizeInBytes.end());
- PaddingElement.erase(PaddingElement.begin()+FieldNo,
- PaddingElement.end());
- }
-
- /// getNewElementByteOffset - If we add a new element with the specified
- /// alignment, what byte offset will it land at?
- uint64_t getNewElementByteOffset(unsigned ByteAlignment) {
- if (Elements.empty()) return 0;
- uint64_t LastElementEnd =
- ElementOffsetInBytes.back() + ElementSizeInBytes.back();
-
- return (LastElementEnd+ByteAlignment-1) & ~(ByteAlignment-1);
- }
-
- /// addElement - Add an element to the structure with the specified type,
- /// offset and size.
- void addElement(const Type *Ty, uint64_t Offset, uint64_t Size,
- bool ExtraPadding = false) {
- Elements.push_back(Ty);
- ElementOffsetInBytes.push_back(Offset);
- ElementSizeInBytes.push_back(Size);
- PaddingElement.push_back(ExtraPadding);
- lastFieldStartsAtNonByteBoundry(false);
- ExtraBitsAvailable = 0;
- }
-
- /// getFieldEndOffsetInBytes - Return the byte offset of the byte immediately
- /// after the specified field. For example, if FieldNo is 0 and the field
- /// is 4 bytes in size, this will return 4.
- uint64_t getFieldEndOffsetInBytes(unsigned FieldNo) const {
- assert(FieldNo < ElementOffsetInBytes.size() && "Invalid field #!");
- return ElementOffsetInBytes[FieldNo]+ElementSizeInBytes[FieldNo];
- }
-
- /// getEndUnallocatedByte - Return the first byte that isn't allocated at the
- /// end of a structure. For example, for {}, it's 0, for {int} it is 4, for
- /// {int,short}, it is 6.
- uint64_t getEndUnallocatedByte() const {
- if (ElementOffsetInBytes.empty()) return 0;
- return getFieldEndOffsetInBytes(ElementOffsetInBytes.size()-1);
- }
-
- void addNewBitField(uint64_t Size, uint64_t Extra,
- uint64_t FirstUnallocatedByte);
-
- void dump() const;
-};
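
The offset computation used by getNewElementByteOffset and getSizeAsLLVMStruct
above is the usual round-up-to-alignment trick. A tiny self-contained check,
with values chosen purely for illustration:

  #include <cstdint>

  constexpr uint64_t roundUp(uint64_t End, unsigned Align) {
    return (End + Align - 1) & ~uint64_t(Align - 1);
  }
  static_assert(roundUp(6, 4) == 8, "a field ending at byte 6 places the next i32 at 8");
  static_assert(roundUp(8, 4) == 8, "aligned offsets are left unchanged");
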
-
-// Add a new element which is a bitfield. Size is not the size of the bitfield
-// itself, but the number of bits required (after FirstUnallocatedByte) to
-// determine the type of the new field that will be used to access it.
-// If possible, allocate a field with room for Size+Extra bits.
-void StructTypeConversionInfo::addNewBitField(uint64_t Size, uint64_t Extra,
- uint64_t FirstUnallocatedByte) {
-
- // Figure out the LLVM type that we will use for the new field.
- // Note that Size is not necessarily the size of the new field. It indicates
- // the additional bits required after FirstUnallocatedByte to cover the new field.
- const Type *NewFieldTy = 0;
-
- // First try an ABI-aligned field including (some of) the Extra bits.
- // This field must satisfy Size <= w && w <= XSize.
- uint64_t XSize = Size + Extra;
- for (unsigned w = NextPowerOf2(std::min(UINT64_C(64), XSize))/2;
- w >= Size && w >= 8; w /= 2) {
- if (TD.isIllegalInteger(w))
- continue;
- // Would a w-sized integer field be aligned here?
- const unsigned a = TD.getABIIntegerTypeAlignment(w);
- if (FirstUnallocatedByte & (a-1) || a > getGCCStructAlignmentInBytes())
- continue;
- // OK, use w-sized integer.
- NewFieldTy = IntegerType::get(Context, w);
- break;
- }
-
- // Try an integer field that holds Size bits.
- if (!NewFieldTy) {
- if (Size <= 8)
- NewFieldTy = Type::getInt8Ty(Context);
- else if (Size <= 16)
- NewFieldTy = Type::getInt16Ty(Context);
- else if (Size <= 32)
- NewFieldTy = Type::getInt32Ty(Context);
- else {
- assert(Size <= 64 && "Bitfield too large!");
- NewFieldTy = Type::getInt64Ty(Context);
- }
- }
-
- // Check that the alignment of NewFieldTy won't cause a gap in the structure!
- unsigned ByteAlignment = getTypeAlignment(NewFieldTy);
- if (FirstUnallocatedByte & (ByteAlignment-1) ||
- ByteAlignment > getGCCStructAlignmentInBytes()) {
- // Instead of inserting a nice whole field, insert a small array of bytes (i8).
- NewFieldTy = ArrayType::get(Type::getInt8Ty(Context), (Size+7)/8);
- }
-
- // Finally, add the new field.
- addElement(NewFieldTy, FirstUnallocatedByte, getTypeSize(NewFieldTy));
- ExtraBitsAvailable = NewFieldTy->getPrimitiveSizeInBits() - Size;
-}
-
-void StructTypeConversionInfo::dump() const {
- raw_ostream &OS = outs();
- OS << "Info has " << Elements.size() << " fields:\n";
- for (unsigned i = 0, e = Elements.size(); i != e; ++i) {
- OS << " Offset = " << ElementOffsetInBytes[i]
- << " Size = " << ElementSizeInBytes[i]
- << " Type = ";
- WriteTypeSymbolic(OS, Elements[i], TheModule);
- OS << "\n";
- }
- OS.flush();
-}
-
-std::map<tree, StructTypeConversionInfo *> StructTypeInfoMap;
-
-/// Return true if and only if the field at position 'index' of struct type
-/// 'type' is a padding element added to match the LLVM struct type size and
-/// the GCC struct type size.
-bool isPaddingElement(tree type, unsigned index) {
-
- StructTypeConversionInfo *Info = StructTypeInfoMap[type];
-
- // If info is not available then be conservative and return false.
- if (!Info)
- return false;
-
- assert ( Info->Elements.size() == Info->PaddingElement.size()
- && "Invalid StructTypeConversionInfo");
- assert ( index < Info->PaddingElement.size()
- && "Invalid PaddingElement index");
- return Info->PaddingElement[index];
-}
-
-/// oldtree and newtree are union members. If they represent structs then
-/// adjust their PaddingElement bits: a padding field in one struct may not
-/// be a padding field in another struct.
-void adjustPaddingElement(tree oldtree, tree newtree) {
-
- StructTypeConversionInfo *OldInfo = StructTypeInfoMap[oldtree];
- StructTypeConversionInfo *NewInfo = StructTypeInfoMap[newtree];
-
- if (!OldInfo || !NewInfo)
- return;
-
- /// FIXME : Find overlapping padding fields and preserve their
- /// isPaddingElement bit. For now, clear all isPaddingElement bits.
- for (unsigned i = 0, size = NewInfo->PaddingElement.size(); i != size; ++i)
- NewInfo->PaddingElement[i] = false;
-
- for (unsigned i = 0, size = OldInfo->PaddingElement.size(); i != size; ++i)
- OldInfo->PaddingElement[i] = false;
-
-}
-
-/// DecodeStructFields - This method decodes the specified field, if it is a
-/// FIELD_DECL, adding or updating the specified StructTypeConversionInfo to
-/// reflect it. Return true if field is decoded correctly. Otherwise return
-/// false.
-bool TypeConverter::DecodeStructFields(tree Field,
- StructTypeConversionInfo &Info) {
- // Handle bit-fields specially.
- if (isBitfield(Field)) {
- // If this field forces a packed LLVM struct then retry the entire struct
- // layout.
- if (!Info.isPacked()) {
- // An unnamed bitfield does not contribute to struct alignment
- // computations. Use a packed LLVM structure in such cases.
- if (!DECL_NAME(Field))
- return false;
- // If this field is packed then the struct may need padding fields
- // before this field.
- if (DECL_PACKED(Field))
- return false;
- // If Field has user-defined alignment and it does not match Ty's alignment
- // then convert to a packed struct and try again.
- if (TYPE_USER_ALIGN(TREE_TYPE(Field))) {
- const Type *Ty = ConvertType(TREE_TYPE(Field));
- if (TYPE_ALIGN(TREE_TYPE(Field)) !=
- 8 * Info.getTypeAlignment(Ty))
- return false;
- }
- }
- DecodeStructBitField(Field, Info);
- return true;
- }
-
- Info.allFieldsAreNotBitFields();
-
- // Get the starting offset in the record.
- uint64_t StartOffsetInBits = getFieldOffsetInBits(Field);
- assert((StartOffsetInBits & 7) == 0 && "Non-bit-field has non-byte offset!");
- uint64_t StartOffsetInBytes = StartOffsetInBits/8;
-
- const Type *Ty = ConvertType(TREE_TYPE(Field));
-
- // If this field is packed then the struct may need padding fields
- // before this field.
- if (DECL_PACKED(Field) && !Info.isPacked())
- return false;
- // Pop any previous elements out of the struct if they overlap with this one.
- // This can happen when the C++ front-end overlaps fields with tail padding in
- // C++ classes.
- else if (!Info.ResizeLastElementIfOverlapsWith(StartOffsetInBytes, Field, Ty)) {
- // LLVM disagrees as to where this field should go in the natural field
- // ordering. Therefore convert to a packed struct and try again.
- return false;
- }
- else if (TYPE_USER_ALIGN(TREE_TYPE(Field))
- && (unsigned)DECL_ALIGN(Field) != 8 * Info.getTypeAlignment(Ty)
- && !Info.isPacked()) {
- // If Field has user-defined alignment and it does not match Ty's alignment
- // then convert to a packed struct and try again.
- return false;
- } else
- // At this point, we know that adding the element will happen at the right
- // offset. Add it.
- Info.addElement(Ty, StartOffsetInBytes, Info.getTypeSize(Ty));
- return true;
-}
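
A hypothetical input that exercises the packed retry path described above:

  // 'b' is individually packed, so it starts at byte offset 1; the first
  // (non-packed) layout pass cannot honour that, DecodeStructFields returns
  // false, and the record is redone as a packed LLVM struct, roughly
  // <{ i8, i32 }>, matching GCC's five-byte layout.
  struct P {
    char a;
    int  b __attribute__((packed));
  };
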
-
-/// DecodeStructBitField - This method decodes the specified bit-field, adding
-/// or updating the specified StructTypeConversionInfo to reflect it.
-///
-/// Note that in general, we cannot produce a good covering of struct fields for
-/// bitfields. As such, we only make sure that all bits in a struct that
-/// correspond to a bitfield are represented in the LLVM struct with
-/// (potentially multiple) fields of integer type. This ensures that
-/// initialized globals with bitfields can have the initializers for the
-/// bitfields specified.
-void TypeConverter::DecodeStructBitField(tree_node *Field,
- StructTypeConversionInfo &Info) {
- unsigned FieldSizeInBits = TREE_INT_CST_LOW(DECL_SIZE(Field));
-
- if (FieldSizeInBits == 0) // Ignore 'int:0', which just affects layout.
- return;
-
- // Get the starting offset in the record.
- uint64_t StartOffsetInBits = getFieldOffsetInBits(Field);
- uint64_t EndBitOffset = FieldSizeInBits+StartOffsetInBits;
-
- // If the last inserted LLVM field completely contains this bitfield, just
- // ignore this field.
- if (!Info.Elements.empty()) {
- uint64_t LastFieldBitOffset = Info.ElementOffsetInBytes.back()*8;
- unsigned LastFieldBitSize = Info.ElementSizeInBytes.back()*8;
- assert(LastFieldBitOffset <= StartOffsetInBits &&
- "This bitfield isn't part of the last field!");
- if (EndBitOffset <= LastFieldBitOffset+LastFieldBitSize &&
- LastFieldBitOffset+LastFieldBitSize >= StartOffsetInBits) {
- // Already contained in previous field. Update remaining extra bits that
- // are available.
- Info.extraBitsAvailable(Info.getEndUnallocatedByte()*8 - EndBitOffset);
- return;
- }
- }
-
- // Otherwise, this bitfield lives (potentially) partially in the preceding
- // field and in fields that exist after it. Add integer-typed fields to the
- // LLVM struct such that there are no holes in the struct where the bitfield
- // is: these holes would make it impossible to statically initialize a global
- // of this type that has an initializer for the bitfield.
-
- // We want the integer-typed fields as large as possible up to the machine
- // word size. If there are more bitfields following this one, try to include
- // them in the same field.
-
- // Calculate the total number of bits in the contiguous group of bitfields
- // following this one. This is the number of bits that addNewBitField should
- // try to include.
- unsigned ExtraSizeInBits = 0;
- tree LastBitField = 0;
- for (tree f = TREE_CHAIN(Field); f; f = TREE_CHAIN(f)) {
- if (TREE_CODE(f) != FIELD_DECL ||
- TREE_CODE(DECL_FIELD_OFFSET(f)) != INTEGER_CST)
- break;
- if (isBitfield(f))
- LastBitField = f;
- else {
- // We can use all the bits up to the next non-bitfield.
- LastBitField = 0;
- ExtraSizeInBits = getFieldOffsetInBits(f) - EndBitOffset;
- break;
- }
- }
- // Record ended in a bitfield? Use all of the last byte.
- if (LastBitField)
- ExtraSizeInBits = RoundUpToAlignment(getFieldOffsetInBits(LastBitField) +
- TREE_INT_CST_LOW(DECL_SIZE(LastBitField)), 8) - EndBitOffset;
-
- // Compute the number of bits that we need to add to this struct to cover
- // this field.
- uint64_t FirstUnallocatedByte = Info.getEndUnallocatedByte();
- uint64_t StartOffsetFromByteBoundry = StartOffsetInBits & 7;
-
- if (StartOffsetInBits < FirstUnallocatedByte*8) {
-
- uint64_t AvailableBits = FirstUnallocatedByte * 8 - StartOffsetInBits;
- // This field's starting point is already allocated.
- if (StartOffsetFromByteBoundry == 0) {
- // This field starts at a byte boundary. Need to allocate space
- // for additional bytes not yet allocated.
- unsigned NumBitsToAdd = FieldSizeInBits - AvailableBits;
- Info.addNewBitField(NumBitsToAdd, ExtraSizeInBits, FirstUnallocatedByte);
- return;
- }
-
- // Otherwise, this field's starting point is inside a previously used byte.
- // This happens with packed bitfields. In this case one LLVM field is
- // used to access both the previous field and the current field.
- unsigned prevFieldTypeSizeInBits =
- Info.ElementSizeInBytes[Info.Elements.size() - 1] * 8;
-
- unsigned NumBitsRequired = prevFieldTypeSizeInBits
- + (FieldSizeInBits - AvailableBits);
-
- if (NumBitsRequired > 64) {
- // Use bits from previous field.
- NumBitsRequired = FieldSizeInBits - AvailableBits;
- } else {
- // If the type used to access the previous field is not large enough then
- // remove the previous field and insert a new field that is large enough
- // to hold both fields.
- Info.RemoveFieldsAfter(Info.Elements.size() - 1);
- for (unsigned idx = 0; idx < (prevFieldTypeSizeInBits/8); ++idx)
- FirstUnallocatedByte--;
- }
- Info.addNewBitField(NumBitsRequired, ExtraSizeInBits, FirstUnallocatedByte);
- // Do this after adding Field.
- Info.lastFieldStartsAtNonByteBoundry(true);
- return;
- }
-
- if (StartOffsetInBits > FirstUnallocatedByte*8) {
- // If there is padding between the last field and this field, insert
- // explicit bytes into the struct to represent it.
- unsigned PadBytes = 0;
- unsigned PadBits = 0;
- if (StartOffsetFromByteBoundry != 0) {
- // New field does not start at a byte boundary.
- PadBits = StartOffsetInBits - (FirstUnallocatedByte*8);
- PadBytes = PadBits/8;
- PadBits = PadBits - PadBytes*8;
- } else
- PadBytes = StartOffsetInBits/8-FirstUnallocatedByte;
-
- if (PadBytes) {
- const Type *Pad = Type::getInt8Ty(Context);
- if (PadBytes != 1)
- Pad = ArrayType::get(Pad, PadBytes);
- Info.addElement(Pad, FirstUnallocatedByte, PadBytes);
- }
-
- FirstUnallocatedByte = StartOffsetInBits/8;
- // This field will use some of the bits from this padding, if the
- // starting offset is not at a byte boundary.
- if (StartOffsetFromByteBoundry != 0)
- FieldSizeInBits += PadBits;
- }
-
- // Now, Field starts at FirstUnallocatedByte and everything is aligned.
- Info.addNewBitField(FieldSizeInBits, ExtraSizeInBits, FirstUnallocatedByte);
-}
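
For illustration (hypothetical layout), a run of adjacent bitfields is covered
by integer fields rather than being modelled one-to-one:

  struct Flags {
    unsigned a : 3;     // the group a/b/c spans 23 bits in total
    unsigned b : 13;
    unsigned c : 7;
  };
  // DecodeStructBitField typically covers the whole group with a single i32
  // field, which keeps static initializers for these bitfields representable.
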
-
-/// UnionHasOnlyZeroOffsets - Check if a union type has only members with
-/// offsets that are zero, e.g., no Fortran equivalences.
-static bool UnionHasOnlyZeroOffsets(tree type) {
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field)) {
- if (TREE_CODE(Field) != FIELD_DECL) continue;
- if (!OffsetIsLLVMCompatible(Field))
- return false;
- if (getFieldOffsetInBits(Field) != 0)
- return false;
- }
- return true;
-}
-
-/// SelectUnionMember - Find the union member with the largest size (or the
-/// smallest, for a QUAL_UNION_TYPE). If there are multiple members with the
-/// same size, select the least aligned one. If the chosen member is smaller
-/// than other members, padding is added later on anyway to match the union
-/// size.
-void TypeConverter::SelectUnionMember(tree type,
- StructTypeConversionInfo &Info) {
- bool FindBiggest = TREE_CODE(type) != QUAL_UNION_TYPE;
-
- const Type *UnionTy = 0;
- tree GccUnionTy = 0;
- tree UnionField = 0;
- unsigned MinAlign = ~0U;
- uint64_t BestSize = FindBiggest ? 0 : ~(uint64_t)0;
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field)) {
- if (TREE_CODE(Field) != FIELD_DECL) continue;
- assert(DECL_FIELD_OFFSET(Field) && integer_zerop(DECL_FIELD_OFFSET(Field))
- && "Union with non-zero offset?");
-
- // Skip fields that are known not to be present.
- if (TREE_CODE(type) == QUAL_UNION_TYPE &&
- integer_zerop(DECL_QUALIFIER(Field)))
- continue;
-
- tree TheGccTy = TREE_TYPE(Field);
-
- // Skip zero-length bitfields. These are only used for setting the
- // alignment.
- if (DECL_BIT_FIELD(Field) && DECL_SIZE(Field) &&
- integer_zerop(DECL_SIZE(Field)))
- continue;
-
- const Type *TheTy = ConvertType(TheGccTy);
- unsigned Align = Info.getTypeAlignment(TheTy);
- uint64_t Size = Info.getTypeSize(TheTy);
-
- adjustPaddingElement(GccUnionTy, TheGccTy);
-
- // Select TheTy as union type if it is the biggest/smallest field (depending
- // on the value of FindBiggest). If more than one field achieves this size
- // then choose the least aligned.
- if ((Size == BestSize && Align < MinAlign) ||
- (FindBiggest && Size > BestSize) ||
- (!FindBiggest && Size < BestSize)) {
- UnionTy = TheTy;
- UnionField = Field;
- GccUnionTy = TheGccTy;
- BestSize = Size;
- MinAlign = Align;
- }
-
- // Skip remaining fields if this one is known to be present.
- if (TREE_CODE(type) == QUAL_UNION_TYPE &&
- integer_onep(DECL_QUALIFIER(Field)))
- break;
- }
-
- if (UnionTy) { // Not an empty union.
- if (8 * Info.getTypeAlignment(UnionTy) > TYPE_ALIGN(type))
- Info.markAsPacked();
-
- if (isBitfield(UnionField)) {
- unsigned FieldSizeInBits = TREE_INT_CST_LOW(DECL_SIZE(UnionField));
- Info.addNewBitField(FieldSizeInBits, 0, 0);
- } else {
- Info.allFieldsAreNotBitFields();
- Info.addElement(UnionTy, 0, Info.getTypeSize(UnionTy));
- }
- }
-}
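
A hypothetical union showing the selection rule above:

  union U {
    char   c;
    int    i;
    double d;   // largest member: the union is represented as roughly { double }
  };
  // Smaller members are later accessed through casts of the union's storage;
  // for a QUAL_UNION_TYPE the smallest candidate would be chosen instead.
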
-
-/// ConvertRECORD - Convert a RECORD_TYPE, UNION_TYPE or QUAL_UNION_TYPE to
-/// an LLVM type.
-// A note on C++ virtual base class layout. Consider the following example:
-// class A { public: int i0; };
-// class B : public virtual A { public: int i1; };
-// class C : public virtual A { public: int i2; };
-// class D : public virtual B, public virtual C { public: int i3; };
-//
-// The TYPE nodes gcc builds for classes represent that class as it looks
-// standing alone. Thus B is size 12 and looks like { vptr; i1; baseclass A; }
-// However, this is not the layout used when that class is a base class for
-// some other class, yet the same TYPE node is still used. D in the above has
-// both a BINFO list entry and a FIELD that reference type B, but the virtual
-// base class A within B is not allocated in that case; B-within-D is only
-// size 8. The correct size is in the FIELD node (does not match the size
-// in its child TYPE node.) The fields to be omitted from the child TYPE,
-// as far as I can tell, are always the last ones; but also, there is a
-// TYPE_DECL node sitting in the middle of the FIELD list separating virtual
-// base classes from everything else.
-//
-// Similarly, a nonvirtual base class which has virtual base classes might
-// not contain those virtual base classes when used as a nonvirtual base class.
-// There is seemingly no way to detect this except for the size differential.
-//
-// For LLVM purposes, we build a new type for B-within-D that
-// has the correct size and layout for that usage.
-
-const Type *TypeConverter::ConvertRECORD(tree type) {
- if (const Type *Ty = GET_TYPE_LLVM(type)) {
- // If we already compiled this type, and if it was not a forward
- // definition that is now defined, use the old type.
- if (!Ty->isOpaqueTy() || TYPE_SIZE(type) == 0)
- return Ty;
- }
-
- if (TYPE_SIZE(type) == 0) { // Forward declaration?
- const Type *Ty = OpaqueType::get(Context);
- return TypeDB.setType(type, Ty);
- }
-
- // Note that we are compiling a struct now.
- bool OldConvertingStruct = ConvertingStruct;
- ConvertingStruct = true;
-
- // Record those fields which will be converted to LLVM fields.
- SmallVector<std::pair<tree, uint64_t>, 32> Fields;
- for (tree Field = TYPE_FIELDS(type); Field; Field = TREE_CHAIN(Field))
- if (TREE_CODE(Field) == FIELD_DECL && OffsetIsLLVMCompatible(Field))
- Fields.push_back(std::make_pair(Field, getFieldOffsetInBits(Field)));
-
- // The fields are almost always sorted, but occasionally not. Sort them by
- // field offset.
- for (unsigned i = 1, e = Fields.size(); i < e; i++)
- for (unsigned j = i; j && Fields[j].second < Fields[j-1].second; j--)
- std::swap(Fields[j], Fields[j-1]);
-
- StructTypeConversionInfo *Info =
- new StructTypeConversionInfo(*TheTarget, TYPE_ALIGN(type) / 8,
- TYPE_PACKED(type));
-
- // Convert over all of the elements of the struct.
- // Workaround to get Fortran EQUIVALENCE working.
- // TODO: Unify record and union logic and handle this optimally.
- bool HasOnlyZeroOffsets = TREE_CODE(type) != RECORD_TYPE &&
- UnionHasOnlyZeroOffsets(type);
- if (HasOnlyZeroOffsets) {
- SelectUnionMember(type, *Info);
- } else {
- // Convert over all of the elements of the struct.
- bool retryAsPackedStruct = false;
- for (unsigned i = 0, e = Fields.size(); i < e; i++)
- if (DecodeStructFields(Fields[i].first, *Info) == false) {
- retryAsPackedStruct = true;
- break;
- }
-
- if (retryAsPackedStruct) {
- delete Info;
- Info = new StructTypeConversionInfo(*TheTarget, TYPE_ALIGN(type) / 8,
- true);
- for (unsigned i = 0, e = Fields.size(); i < e; i++)
- if (DecodeStructFields(Fields[i].first, *Info) == false) {
- assert(0 && "Unable to decode struct fields.");
- }
- }
- }
-
- // Insert tail padding if the LLVM struct requires explicit tail padding to
- // be the same size as the GCC struct or union. This handles, e.g., "{}" in
- // C++, and cases where a union has larger alignment than the largest member
- // does.
- if (TYPE_SIZE(type) && TREE_CODE(TYPE_SIZE(type)) == INTEGER_CST) {
- uint64_t GCCTypeSize = getInt64(TYPE_SIZE_UNIT(type), true);
- uint64_t LLVMStructSize = Info->getSizeAsLLVMStruct();
-
- if (LLVMStructSize > GCCTypeSize) {
- Info->RemoveExtraBytes();
- LLVMStructSize = Info->getSizeAsLLVMStruct();
- }
-
- if (LLVMStructSize != GCCTypeSize) {
- assert(LLVMStructSize < GCCTypeSize &&
- "LLVM type size doesn't match GCC type size!");
- uint64_t LLVMLastElementEnd = Info->getNewElementByteOffset(1);
-
- // If only one byte is needed then insert i8.
- if (GCCTypeSize-LLVMLastElementEnd == 1)
- Info->addElement(Type::getInt8Ty(Context), 1, 1);
- else {
- if (((GCCTypeSize-LLVMStructSize) % 4) == 0 &&
- (Info->getAlignmentAsLLVMStruct() %
- Info->getTypeAlignment(Type::getInt32Ty(Context))) == 0) {
- // Insert array of i32.
- unsigned Int32ArraySize = (GCCTypeSize-LLVMStructSize) / 4;
- const Type *PadTy =
- ArrayType::get(Type::getInt32Ty(Context), Int32ArraySize);
- Info->addElement(PadTy, GCCTypeSize - LLVMLastElementEnd,
- Int32ArraySize, true /* Padding Element */);
- } else {
- const Type *PadTy = ArrayType::get(Type::getInt8Ty(Context),
- GCCTypeSize-LLVMStructSize);
- Info->addElement(PadTy, GCCTypeSize - LLVMLastElementEnd,
- GCCTypeSize - LLVMLastElementEnd,
- true /* Padding Element */);
- }
- }
- }
- } else
- Info->RemoveExtraBytes();
-
- const Type *ResultTy = Info->getLLVMType();
- StructTypeInfoMap[type] = Info;
-
- const OpaqueType *OldTy = cast_or_null<OpaqueType>(GET_TYPE_LLVM(type));
- TypeDB.setType(type, ResultTy);
-
- // If there was a forward declaration for this type that is now resolved,
- // refine anything that used it to the new type.
- if (OldTy)
- const_cast<OpaqueType*>(OldTy)->refineAbstractTypeTo(ResultTy);
-
- // We have finished converting this struct. See if this is the outer-most
- // struct or union being converted by ConvertType.
- ConvertingStruct = OldConvertingStruct;
- if (!ConvertingStruct) {
-
- // If this is the outer-most level of structness, resolve any pointers
- // that were deferred.
- while (!PointersToReresolve.empty()) {
- if (tree PtrTy = PointersToReresolve.back()) {
- ConvertType(PtrTy); // Reresolve this pointer type.
- assert((PointersToReresolve.empty() ||
- PointersToReresolve.back() != PtrTy) &&
- "Something went wrong with pointer resolution!");
- } else {
- // Null marker element.
- PointersToReresolve.pop_back();
- }
- }
- }
-
- return GET_TYPE_LLVM(type);
-}
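
One case the tail-padding logic in ConvertRECORD above handles is a union whose
declared alignment exceeds its members; a hypothetical example:

  union Aligned {
    char c;
  } __attribute__((aligned(16)));
  // GCC gives this a size of 16 bytes, while the member list only covers one
  // byte, so an explicit padding element is appended to the LLVM struct.
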
Removed: dragonegg/trunk/unknown/llvm-os.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/unknown/llvm-os.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/unknown/llvm-os.h (original)
+++ dragonegg/trunk/unknown/llvm-os.h (removed)
@@ -1 +0,0 @@
-#error Unknown target operating system
Removed: dragonegg/trunk/unknown/llvm-target.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/unknown/llvm-target.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/unknown/llvm-target.cpp (original)
+++ dragonegg/trunk/unknown/llvm-target.cpp (removed)
@@ -1 +0,0 @@
-#error Unknown target architecture
Copied: dragonegg/trunk/utils/TargetInfo.cpp (from r127403, dragonegg/trunk/utils/target.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/utils/TargetInfo.cpp?p2=dragonegg/trunk/utils/TargetInfo.cpp&p1=dragonegg/trunk/utils/target.cpp&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/utils/target.cpp (original)
+++ dragonegg/trunk/utils/TargetInfo.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===---- llvm-target.cpp - Utility for getting info about the target -----===//
+//===----- TargetInfo.cpp - Utility for getting info about the target -----===//
//
// Copyright (C) 2009, 2010, 2011 Duncan Sands.
//
Removed: dragonegg/trunk/utils/target.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/utils/target.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/utils/target.cpp (original)
+++ dragonegg/trunk/utils/target.cpp (removed)
@@ -1,80 +0,0 @@
-//===---- llvm-target.cpp - Utility for getting info about the target -----===//
-//
-// Copyright (C) 2009, 2010, 2011 Duncan Sands.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// Utility program for getting information about the system that GCC targets.
-//===----------------------------------------------------------------------===//
-
-#include <llvm/ADT/Triple.h>
-#include <cstring>
-#include <iostream>
-
-using namespace llvm;
-
-static void PrintTriple(Triple &T) {
- std::cout << T.getTriple() << "\n";
-}
-static void PrintArchName(Triple &T) {
- std::cout << T.getArchTypeName(T.getArch()) << "\n";
-}
-static void PrintVendorName(Triple &T) {
- std::cout << T.getVendorTypeName(T.getVendor()) << "\n";
-}
-static void PrintOSName(Triple &T) {
- std::cout << T.getOSTypeName(T.getOS()) << "\n";
-}
-static void PrintArchTypePrefix(Triple &T) {
- std::cout << T.getArchTypePrefix(T.getArch()) << "\n";
-}
-
-struct Option {
- const char *Name;
- void (*Action)(Triple &);
-};
-
-static Option Options[] = {
- { "-t", PrintTriple },
- { "-a", PrintArchName },
- { "-v", PrintVendorName },
- { "-o", PrintOSName },
- { "-p", PrintArchTypePrefix },
- { NULL, NULL }
-};
-
-int main(int argc, char **argv) {
- Triple T(Triple::normalize(TARGET_TRIPLE));
-
- for (int i = 1; i < argc; ++i) {
- bool Found = false;
- for (Option *O = Options; O->Name; ++O)
- if (!strcmp(argv[i], O->Name)) {
- Found = true;
- O->Action(T);
- break;
- }
- if (!Found) {
- std::cerr << "Unknown option \"" << argv[i] << "\"\n";
- std::cerr << "Usage: " << argv[0];
- for (Option *O = Options; O->Name; ++O)
- std::cerr << " " << O->Name;
- std::cerr << "\n";
- return 1;
- }
- }
-
- return 0;
-}
Copied: dragonegg/trunk/x86/Target.cpp (from r127403, dragonegg/trunk/x86/llvm-target.cpp)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/x86/Target.cpp?p2=dragonegg/trunk/x86/Target.cpp&p1=dragonegg/trunk/x86/llvm-target.cpp&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/x86/llvm-target.cpp (original)
+++ dragonegg/trunk/x86/Target.cpp Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//===------------ llvm-target.cpp - Implements the IA-32 ABI. -------------===//
+//===--------------- Target.cpp - Implements the IA-32 ABI. ---------------===//
//
// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Evan Cheng,
// Duncan Sands et al.
@@ -22,8 +22,8 @@
//===----------------------------------------------------------------------===//
// Plugin headers
-#include "llvm-abi.h"
-#include "llvm-target.h"
+#include "ABI.h"
+#include "Target.h"
// LLVM headers
#include "llvm/Module.h"
Copied: dragonegg/trunk/x86/Target.h (from r127403, dragonegg/trunk/x86/llvm-target.h)
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/x86/Target.h?p2=dragonegg/trunk/x86/Target.h&p1=dragonegg/trunk/x86/llvm-target.h&r1=127403&r2=127408&rev=127408&view=diff
==============================================================================
--- dragonegg/trunk/x86/llvm-target.h (original)
+++ dragonegg/trunk/x86/Target.h Thu Mar 10 10:05:54 2011
@@ -1,4 +1,4 @@
-//==-- llvm-target.h - Target hooks for GCC to LLVM conversion ---*- C++ -*-==//
+//==----- Target.h - Target hooks for GCC to LLVM conversion -----*- C++ -*-==//
//
// Copyright (C) 2007, 2008, 2009, 2010, 2011 Anton Korobeynikov, Duncan Sands
// et al.
@@ -21,8 +21,8 @@
// This file declares some target-specific hooks for GCC to LLVM conversion.
//===----------------------------------------------------------------------===//
-#ifndef LLVM_TARGET_H
-#define LLVM_TARGET_H
+#ifndef DRAGONEGG_TARGET_H
+#define DRAGONEGG_TARGET_H
/* LLVM specific stuff for supporting calling convention output */
#define TARGET_ADJUST_LLVM_CC(CC, type) \
@@ -96,7 +96,7 @@
if (TARGET_64BIT && TARGET_NO_RED_ZONE) \
disable_red_zone = 1;
-#ifdef LLVM_ABI_H
+#ifdef DRAGONEGG_ABI_H
/* On x86-32 objects containing SSE vectors are 16 byte aligned, everything
else 4. On x86-64 vectors are 8-byte aligned, everything else can
@@ -225,7 +225,7 @@
llvm_x86_64_aggregate_partially_passed_in_regs((E), (SE), (ISR)) : \
false)
-#endif /* LLVM_ABI_H */
+#endif /* DRAGONEGG_ABI_H */
/* Register class used for passing given 64bit part of the argument.
These represent classes as documented by the PS ABI, with the exception
@@ -400,4 +400,4 @@
argvec.push_back("-force-align-stack"); \
} while (0)
-#endif /* LLVM_TARGET_H */
+#endif /* DRAGONEGG_TARGET_H */
Removed: dragonegg/trunk/x86/llvm-target.cpp
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/x86/llvm-target.cpp?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/x86/llvm-target.cpp (original)
+++ dragonegg/trunk/x86/llvm-target.cpp (removed)
@@ -1,1646 +0,0 @@
-//===------------ llvm-target.cpp - Implements the IA-32 ABI. -------------===//
-//
-// Copyright (C) 2005, 2006, 2007, 2008, 2009, 2010, 2011 Evan Cheng,
-// Duncan Sands et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file implements the IA-32-specific LLVM ABI.
-//===----------------------------------------------------------------------===//
-
-// Plugin headers
-#include "llvm-abi.h"
-#include "llvm-target.h"
-
-// LLVM headers
-#include "llvm/Module.h"
-#include "llvm/Support/ErrorHandling.h"
-
-// System headers
-#include <gmp.h>
-
-// GCC headers
-extern "C" {
-#include "config.h"
-// Stop GCC declaring 'getopt' as it can clash with the system's declaration.
-#undef HAVE_DECL_GETOPT
-#include "system.h"
-#include "coretypes.h"
-#include "target.h"
-#include "tree.h"
-
-#include "gimple.h"
-#include "toplev.h"
-}
-
-static LLVMContext &Context = getGlobalContext();
-
-/// BitCastToIntVector - Bitcast the vector operand to a vector of integers of
-/// the same length.
-static Value *BitCastToIntVector(Value *Op, LLVMBuilder &Builder) {
- const VectorType *VecTy = cast<VectorType>(Op->getType());
- const Type *EltTy = VecTy->getElementType();
- const Type *IntTy = IntegerType::get(Context,EltTy->getPrimitiveSizeInBits());
- return Builder.CreateBitCast(Op, VectorType::get(IntTy,
- VecTy->getNumElements()));
-}
-
-/// BuiltinCode - An enumerated type with one value for each supported builtin.
-enum BuiltinCode {
- SearchForHandler, // Builtin not seen before - search for a handler.
-#define DEFINE_BUILTIN(x) x
-#include "x86_builtins"
-#undef DEFINE_BUILTIN
- , UnsupportedBuiltin // There is no handler for this builtin.
-};
-
-struct HandlerEntry {
- const char *Name; BuiltinCode Handler;
-};
-
-static bool LT(const HandlerEntry &E, const HandlerEntry &F) {
- return strcmp(E.Name, F.Name) < 0;
-}
-
-/* TargetIntrinsicLower - For builtins that we want to expand to normal LLVM
- * code, emit the code now. If we can handle the builtin, this function emits
- * the code and returns true.
- */
-bool TreeToLLVM::TargetIntrinsicLower(gimple stmt,
- tree fndecl,
- const MemRef * /*DestLoc*/,
- Value *&Result,
- const Type *ResultType,
- std::vector<Value*> &Ops) {
- // DECL_FUNCTION_CODE contains a value of the enumerated type ix86_builtins,
- // declared in i386.c. If this type was visible to us then we could simply
- // use a switch statement on DECL_FUNCTION_CODE to jump to the right code for
- // handling the builtin. But the type isn't visible, so instead we generate
- // at run-time a map from the values of DECL_FUNCTION_CODE to values of the
- // enumerated type BuiltinCode (defined above), the analog of ix86_builtins,
- // and do the switch on the BuiltinCode value instead.
-
- // The map from DECL_FUNCTION_CODE values to BuiltinCode.
- static std::vector<BuiltinCode> FunctionCodeMap;
- if (FunctionCodeMap.size() <= DECL_FUNCTION_CODE(fndecl))
- FunctionCodeMap.resize(DECL_FUNCTION_CODE(fndecl) + 1);
-
- // See if we already associated a BuiltinCode with this DECL_FUNCTION_CODE.
- BuiltinCode &Handler = FunctionCodeMap[DECL_FUNCTION_CODE(fndecl)];
- if (Handler == SearchForHandler) {
- // No associated BuiltinCode. Work out what value to use based on the
- // builtin's name.
-
- // List of builtin names (w/o '__builtin_ia32_') and associated BuiltinCode.
- static const HandlerEntry Handlers[] = {
-#define DEFINE_BUILTIN(x) {#x, x}
-#include "x86_builtins"
-#undef DEFINE_BUILTIN
- };
- size_t N = sizeof(Handlers) / sizeof(Handlers[0]);
-#ifndef NDEBUG
- // Check that the list of handlers is sorted by name.
- static bool Checked = false;
- if (!Checked) {
- for (unsigned i = 1; i < N; ++i)
- assert(LT(Handlers[i-1], Handlers[i]) && "Handlers not sorted!");
- Checked = true;
- }
-#endif
-
- Handler = UnsupportedBuiltin;
- const char *Identifier = IDENTIFIER_POINTER(DECL_NAME(fndecl));
- // All builtins handled here have a name starting with __builtin_ia32_.
- if (!strncmp(Identifier, "__builtin_ia32_", 15)) {
- HandlerEntry ToFind = { Identifier + 15, SearchForHandler };
- const HandlerEntry *E = std::lower_bound(Handlers, Handlers + N, ToFind, LT);
- if ((E < Handlers + N) && !strcmp(E->Name, ToFind.Name))
- Handler = E->Handler;
- }
- }
-
- bool flip = false;
- unsigned PredCode;
-
- switch (Handler) {
- case SearchForHandler:
- assert(false && "Unexpected builtin code!");
- case UnsupportedBuiltin: return false;
- case addps:
- case addps256:
- case addpd:
- case addpd256:
- Result = Builder.CreateFAdd(Ops[0], Ops[1]);
- return true;
- case paddb:
- case paddw:
- case paddd:
- case paddq:
- case paddb128:
- case paddw128:
- case paddd128:
- case paddq128:
- Result = Builder.CreateAdd(Ops[0], Ops[1]);
- return true;
- case subps:
- case subps256:
- case subpd:
- case subpd256:
- Result = Builder.CreateFSub(Ops[0], Ops[1]);
- return true;
- case psubb:
- case psubw:
- case psubd:
- case psubq:
- case psubb128:
- case psubw128:
- case psubd128:
- case psubq128:
- Result = Builder.CreateSub(Ops[0], Ops[1]);
- return true;
- case mulps:
- case mulps256:
- case mulpd:
- case mulpd256:
- Result = Builder.CreateFMul(Ops[0], Ops[1]);
- return true;
- case pmullw:
- case pmullw128:
- case pmulld128:
- Result = Builder.CreateMul(Ops[0], Ops[1]);
- return true;
- case divps:
- case divps256:
- case divpd:
- case divpd256:
- Result = Builder.CreateFDiv(Ops[0], Ops[1]);
- return true;
- case pand:
- case pand128:
- Result = Builder.CreateAnd(Ops[0], Ops[1]);
- return true;
- case pandn:
- case pandn128:
- Ops[0] = Builder.CreateNot(Ops[0]);
- Result = Builder.CreateAnd(Ops[0], Ops[1]);
- return true;
- case por:
- case por128:
- Result = Builder.CreateOr(Ops[0], Ops[1]);
- return true;
- case pxor:
- case pxor128:
- Result = Builder.CreateXor(Ops[0], Ops[1]);
- return true;
- case andps:
- case andps256:
- case andpd:
- case andpd256:
- Ops[0] = BitCastToIntVector(Ops[0], Builder);
- Ops[1] = Builder.CreateBitCast(Ops[1], Ops[0]->getType());
- Result = Builder.CreateAnd(Ops[0], Ops[1]);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- case orps:
- case orps256:
- case orpd:
- case orpd256:
- Ops[0] = BitCastToIntVector(Ops[0], Builder);
- Ops[1] = Builder.CreateBitCast(Ops[1], Ops[0]->getType());
- Result = Builder.CreateOr(Ops[0], Ops[1]);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- case xorps:
- case xorps256:
- case xorpd:
- case xorpd256:
- Ops[0] = BitCastToIntVector(Ops[0], Builder);
- Ops[1] = Builder.CreateBitCast(Ops[1], Ops[0]->getType());
- Result = Builder.CreateXor(Ops[0], Ops[1]);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- case andnps:
- case andnps256:
- case andnpd:
- case andnpd256:
- Ops[0] = BitCastToIntVector(Ops[0], Builder);
- Ops[1] = Builder.CreateBitCast(Ops[1], Ops[0]->getType());
- Ops[0] = Builder.CreateNot(Ops[0]);
- Result = Builder.CreateAnd(Ops[0], Ops[1]);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- case shufps:
- if (ConstantInt *Elt = dyn_cast<ConstantInt>(Ops[2])) {
- int EV = Elt->getZExtValue();
- Result = BuildVectorShuffle(Ops[0], Ops[1],
- ((EV & 0x03) >> 0), ((EV & 0x0c) >> 2),
- ((EV & 0x30) >> 4)+4, ((EV & 0xc0) >> 6)+4);
- } else {
- error_at(gimple_location(stmt), "mask must be an immediate");
- Result = Ops[0];
- }
- return true;
- case shufpd:
- if (ConstantInt *Elt = dyn_cast<ConstantInt>(Ops[2])) {
- int EV = Elt->getZExtValue();
- Result = BuildVectorShuffle(Ops[0], Ops[1],
- ((EV & 0x01) >> 0), ((EV & 0x02) >> 1)+2);
- } else {
- error_at(gimple_location(stmt), "mask must be an immediate");
- Result = Ops[0];
- }
- return true;
- case pshufw:
- case pshufd:
- if (ConstantInt *Elt = dyn_cast<ConstantInt>(Ops[1])) {
- int EV = Elt->getZExtValue();
- Result = BuildVectorShuffle(Ops[0], Ops[0],
- ((EV & 0x03) >> 0), ((EV & 0x0c) >> 2),
- ((EV & 0x30) >> 4), ((EV & 0xc0) >> 6));
- } else {
- error_at(gimple_location(stmt), "mask must be an immediate");
- Result = Ops[0];
- }
- return true;
- case pshufhw:
- if (ConstantInt *Elt = dyn_cast<ConstantInt>(Ops[1])) {
- int EV = Elt->getZExtValue();
- Result = BuildVectorShuffle(Ops[0], Ops[0],
- 0, 1, 2, 3,
- ((EV & 0x03) >> 0)+4, ((EV & 0x0c) >> 2)+4,
- ((EV & 0x30) >> 4)+4, ((EV & 0xc0) >> 6)+4);
- return true;
- }
- return false;
- case pshuflw:
- if (ConstantInt *Elt = dyn_cast<ConstantInt>(Ops[1])) {
- int EV = Elt->getZExtValue();
- Result = BuildVectorShuffle(Ops[0], Ops[0],
- ((EV & 0x03) >> 0), ((EV & 0x0c) >> 2),
- ((EV & 0x30) >> 4), ((EV & 0xc0) >> 6),
- 4, 5, 6, 7);
- } else {
- error_at(gimple_location(stmt), "mask must be an immediate");
- Result = Ops[0];
- }
-
- return true;
- case punpckhbw:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 4, 12, 5, 13,
- 6, 14, 7, 15);
- return true;
- case punpckhwd:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 2, 6, 3, 7);
- return true;
- case punpckhdq:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 1, 3);
- return true;
- case punpcklbw:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 8, 1, 9,
- 2, 10, 3, 11);
- return true;
- case punpcklwd:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 4, 1, 5);
- return true;
- case punpckldq:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 2);
- return true;
- case punpckhbw128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 8, 24, 9, 25,
- 10, 26, 11, 27,
- 12, 28, 13, 29,
- 14, 30, 15, 31);
- return true;
- case punpckhwd128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 4, 12, 5, 13, 6, 14, 7, 15);
- return true;
- case punpckhdq128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 2, 6, 3, 7);
- return true;
- case punpckhqdq128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 1, 3);
- return true;
- case punpcklbw128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 16, 1, 17,
- 2, 18, 3, 19,
- 4, 20, 5, 21,
- 6, 22, 7, 23);
- return true;
- case punpcklwd128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 8, 1, 9, 2, 10, 3, 11);
- return true;
- case punpckldq128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 4, 1, 5);
- return true;
- case punpcklqdq128:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 2);
- return true;
- case unpckhps:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 2, 6, 3, 7);
- return true;
- case unpckhpd:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 1, 3);
- return true;
- case unpcklps:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 4, 1, 5);
- return true;
- case unpcklpd:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 2);
- return true;
- case movhlps:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 6, 7, 2, 3);
- return true;
- case movlhps:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 1, 4, 5);
- return true;
- case movss:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 4, 1, 2, 3);
- return true;
- case movsd:
- Result = BuildVectorShuffle(Ops[0], Ops[1], 2, 1);
- return true;
- case movq128: {
- Value *Zero = Constant::getNullValue(Ops[0]->getType());
- Result = BuildVectorShuffle(Zero, Ops[0], 2, 1);
- return true;
- }
-//TODO IX86_BUILTIN_LOADQ: {
-//TODO const PointerType *i64Ptr = Type::getInt64PtrTy(Context);
-//TODO Ops[0] = Builder.CreateBitCast(Ops[0], i64Ptr);
-//TODO Ops[0] = Builder.CreateLoad(Ops[0]);
-//TODO Value *Zero = ConstantInt::get(Type::getInt64Ty(Context), 0);
-//TODO Result = BuildVector(Zero, Zero, NULL);
-//TODO Value *Idx = ConstantInt::get(Type::getInt32Ty(Context), 0);
-//TODO Result = Builder.CreateInsertElement(Result, Ops[0], Idx);
-//TODO Result = Builder.CreateBitCast(Result, ResultType);
-//TODO return true;
-//TODO }
- case loadups: {
- VectorType *v4f32 = VectorType::get(Type::getFloatTy(Context), 4);
- const PointerType *v4f32Ptr = v4f32->getPointerTo();
- Value *BC = Builder.CreateBitCast(Ops[0], v4f32Ptr);
- LoadInst *LI = Builder.CreateLoad(BC);
- LI->setAlignment(1);
- Result = LI;
- return true;
- }
- case loadupd: {
- VectorType *v2f64 = VectorType::get(Type::getDoubleTy(Context), 2);
- const PointerType *v2f64Ptr = v2f64->getPointerTo();
- Value *BC = Builder.CreateBitCast(Ops[0], v2f64Ptr);
- LoadInst *LI = Builder.CreateLoad(BC);
- LI->setAlignment(1);
- Result = LI;
- return true;
- }
- case loaddqu: {
- VectorType *v16i8 = VectorType::get(Type::getInt8Ty(Context), 16);
- const PointerType *v16i8Ptr = v16i8->getPointerTo();
- Value *BC = Builder.CreateBitCast(Ops[0], v16i8Ptr);
- LoadInst *LI = Builder.CreateLoad(BC);
- LI->setAlignment(1);
- Result = LI;
- return true;
- }
- case storeups: {
- VectorType *v4f32 = VectorType::get(Type::getFloatTy(Context), 4);
- const PointerType *v4f32Ptr = v4f32->getPointerTo();
- Value *BC = Builder.CreateBitCast(Ops[0], v4f32Ptr);
- StoreInst *SI = Builder.CreateStore(Ops[1], BC);
- SI->setAlignment(1);
- return true;
- }
- case storeupd: {
- VectorType *v2f64 = VectorType::get(Type::getDoubleTy(Context), 2);
- const PointerType *v2f64Ptr = v2f64->getPointerTo();
- Value *BC = Builder.CreateBitCast(Ops[0], v2f64Ptr);
- StoreInst *SI = Builder.CreateStore(Ops[1], BC);
- SI->setAlignment(1);
- return true;
- }
- case storedqu: {
- VectorType *v16i8 = VectorType::get(Type::getInt8Ty(Context), 16);
- const PointerType *v16i8Ptr = v16i8->getPointerTo();
- Value *BC = Builder.CreateBitCast(Ops[0], v16i8Ptr);
- StoreInst *SI = Builder.CreateStore(Ops[1], BC);
- SI->setAlignment(1);
- return true;
- }
- case loadhps: {
- const PointerType *f64Ptr = Type::getDoublePtrTy(Context);
- Ops[1] = Builder.CreateBitCast(Ops[1], f64Ptr);
- Value *Load = Builder.CreateLoad(Ops[1]);
- Ops[1] = BuildVector(Load, UndefValue::get(Type::getDoubleTy(Context)), NULL);
- Ops[1] = Builder.CreateBitCast(Ops[1], ResultType);
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 1, 4, 5);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case loadlps: {
- const PointerType *f64Ptr = Type::getDoublePtrTy(Context);
- Ops[1] = Builder.CreateBitCast(Ops[1], f64Ptr);
- Value *Load = Builder.CreateLoad(Ops[1]);
- Ops[1] = BuildVector(Load, UndefValue::get(Type::getDoubleTy(Context)), NULL);
- Ops[1] = Builder.CreateBitCast(Ops[1], ResultType);
- Result = BuildVectorShuffle(Ops[0], Ops[1], 4, 5, 2, 3);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case loadhpd: {
- Value *Load = Builder.CreateLoad(Ops[1]);
- Ops[1] = BuildVector(Load, UndefValue::get(Type::getDoubleTy(Context)), NULL);
- Ops[1] = Builder.CreateBitCast(Ops[1], ResultType);
- Result = BuildVectorShuffle(Ops[0], Ops[1], 0, 2);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case loadlpd: {
- Value *Load = Builder.CreateLoad(Ops[1]);
- Ops[1] = BuildVector(Load, UndefValue::get(Type::getDoubleTy(Context)), NULL);
- Ops[1] = Builder.CreateBitCast(Ops[1], ResultType);
- Result = BuildVectorShuffle(Ops[0], Ops[1], 2, 1);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case storehps: {
- VectorType *v2f64 = VectorType::get(Type::getDoubleTy(Context), 2);
- const PointerType *f64Ptr = Type::getDoublePtrTy(Context);
- Ops[0] = Builder.CreateBitCast(Ops[0], f64Ptr);
- Value *Idx = ConstantInt::get(Type::getInt32Ty(Context), 1);
- Ops[1] = Builder.CreateBitCast(Ops[1], v2f64);
- Ops[1] = Builder.CreateExtractElement(Ops[1], Idx);
- Builder.CreateStore(Ops[1], Ops[0]);
- return true;
- }
- case storelps: {
- VectorType *v2f64 = VectorType::get(Type::getDoubleTy(Context), 2);
- const PointerType *f64Ptr = Type::getDoublePtrTy(Context);
- Ops[0] = Builder.CreateBitCast(Ops[0], f64Ptr);
- Value *Idx = ConstantInt::get(Type::getInt32Ty(Context), 0);
- Ops[1] = Builder.CreateBitCast(Ops[1], v2f64);
- Ops[1] = Builder.CreateExtractElement(Ops[1], Idx);
- Builder.CreateStore(Ops[1], Ops[0]);
- return true;
- }
- case movshdup:
- Result = BuildVectorShuffle(Ops[0], Ops[0], 1, 1, 3, 3);
- return true;
- case movsldup:
- Result = BuildVectorShuffle(Ops[0], Ops[0], 0, 0, 2, 2);
- return true;
- case vec_init_v2si:
- Result = BuildVector(Ops[0], Ops[1], NULL);
- return true;
- case vec_init_v4hi:
- // Sometimes G++ promotes arguments to int.
- for (unsigned i = 0; i != 4; ++i)
- Ops[i] = Builder.CreateIntCast(Ops[i], Type::getInt16Ty(Context),
- /*isSigned*/false);
- Result = BuildVector(Ops[0], Ops[1], Ops[2], Ops[3], NULL);
- return true;
- case vec_init_v8qi:
- // Sometimes G++ promotes arguments to int.
- for (unsigned i = 0; i != 8; ++i)
- Ops[i] = Builder.CreateIntCast(Ops[i], Type::getInt8Ty(Context),
- /*isSigned*/false);
- Result = BuildVector(Ops[0], Ops[1], Ops[2], Ops[3],
- Ops[4], Ops[5], Ops[6], Ops[7], NULL);
- return true;
- case vec_ext_v2si:
- case vec_ext_v4hi:
- case vec_ext_v2df:
- case vec_ext_v2di:
- case vec_ext_v4si:
- case vec_ext_v4sf:
- case vec_ext_v8hi:
- case vec_ext_v16qi:
- Result = Builder.CreateExtractElement(Ops[0], Ops[1]);
- return true;
- case vec_set_v16qi:
- // Sometimes G++ promotes arguments to int.
- Ops[1] = Builder.CreateIntCast(Ops[1], Type::getInt8Ty(Context),
- /*isSigned*/false);
- Result = Builder.CreateInsertElement(Ops[0], Ops[1], Ops[2]);
- return true;
- case vec_set_v4hi:
- case vec_set_v8hi:
- // GCC sometimes doesn't produce the right element type.
- Ops[1] = Builder.CreateIntCast(Ops[1], Type::getInt16Ty(Context),
- /*isSigned*/false);
- Result = Builder.CreateInsertElement(Ops[0], Ops[1], Ops[2]);
- return true;
- case vec_set_v4si:
- Result = Builder.CreateInsertElement(Ops[0], Ops[1], Ops[2]);
- return true;
- case vec_set_v2di:
- Result = Builder.CreateInsertElement(Ops[0], Ops[1], Ops[2]);
- return true;
-
- case cmpeqps: PredCode = 0; goto CMPXXPS;
- case cmpltps: PredCode = 1; goto CMPXXPS;
- case cmpgtps: PredCode = 1; flip = true; goto CMPXXPS;
- case cmpleps: PredCode = 2; goto CMPXXPS;
- case cmpgeps: PredCode = 2; flip = true; goto CMPXXPS;
- case cmpunordps: PredCode = 3; goto CMPXXPS;
- case cmpneqps: PredCode = 4; goto CMPXXPS;
- case cmpnltps: PredCode = 5; goto CMPXXPS;
- case cmpngtps: PredCode = 5; flip = true; goto CMPXXPS;
- case cmpnleps: PredCode = 6; goto CMPXXPS;
- case cmpngeps: PredCode = 6; flip = true; goto CMPXXPS;
- case cmpordps: PredCode = 7; goto CMPXXPS;
- CMPXXPS: {
- Function *cmpps =
- Intrinsic::getDeclaration(TheModule, Intrinsic::x86_sse_cmp_ps);
- Value *Pred = ConstantInt::get(Type::getInt8Ty(Context), PredCode);
- Value *Arg0 = Ops[0];
- Value *Arg1 = Ops[1];
- if (flip) std::swap(Arg0, Arg1);
- Value *CallOps[3] = { Arg0, Arg1, Pred };
- Result = Builder.CreateCall(cmpps, CallOps, CallOps+3);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case cmpeqss: PredCode = 0; goto CMPXXSS;
- case cmpltss: PredCode = 1; goto CMPXXSS;
- case cmpless: PredCode = 2; goto CMPXXSS;
- case cmpunordss: PredCode = 3; goto CMPXXSS;
- case cmpneqss: PredCode = 4; goto CMPXXSS;
- case cmpnltss: PredCode = 5; goto CMPXXSS;
- case cmpnless: PredCode = 6; goto CMPXXSS;
- case cmpordss: PredCode = 7; goto CMPXXSS;
- CMPXXSS: {
- Function *cmpss =
- Intrinsic::getDeclaration(TheModule, Intrinsic::x86_sse_cmp_ss);
- Value *Pred = ConstantInt::get(Type::getInt8Ty(Context), PredCode);
- Value *CallOps[3] = { Ops[0], Ops[1], Pred };
- Result = Builder.CreateCall(cmpss, CallOps, CallOps+3);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case cmpeqpd: PredCode = 0; goto CMPXXPD;
- case cmpltpd: PredCode = 1; goto CMPXXPD;
- case cmpgtpd: PredCode = 1; flip = true; goto CMPXXPD;
- case cmplepd: PredCode = 2; goto CMPXXPD;
- case cmpgepd: PredCode = 2; flip = true; goto CMPXXPD;
- case cmpunordpd: PredCode = 3; goto CMPXXPD;
- case cmpneqpd: PredCode = 4; goto CMPXXPD;
- case cmpnltpd: PredCode = 5; goto CMPXXPD;
- case cmpngtpd: PredCode = 5; flip = true; goto CMPXXPD;
- case cmpnlepd: PredCode = 6; goto CMPXXPD;
- case cmpngepd: PredCode = 6; flip = true; goto CMPXXPD;
- case cmpordpd: PredCode = 7; goto CMPXXPD;
- CMPXXPD: {
- Function *cmppd =
- Intrinsic::getDeclaration(TheModule, Intrinsic::x86_sse2_cmp_pd);
- Value *Pred = ConstantInt::get(Type::getInt8Ty(Context), PredCode);
- Value *Arg0 = Ops[0];
- Value *Arg1 = Ops[1];
- if (flip) std::swap(Arg0, Arg1);
-
- Value *CallOps[3] = { Arg0, Arg1, Pred };
- Result = Builder.CreateCall(cmppd, CallOps, CallOps+3);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case cmpeqsd: PredCode = 0; goto CMPXXSD;
- case cmpltsd: PredCode = 1; goto CMPXXSD;
- case cmplesd: PredCode = 2; goto CMPXXSD;
- case cmpunordsd: PredCode = 3; goto CMPXXSD;
- case cmpneqsd: PredCode = 4; goto CMPXXSD;
- case cmpnltsd: PredCode = 5; goto CMPXXSD;
- case cmpnlesd: PredCode = 6; goto CMPXXSD;
- case cmpordsd: PredCode = 7; goto CMPXXSD;
- CMPXXSD: {
- Function *cmpsd =
- Intrinsic::getDeclaration(TheModule, Intrinsic::x86_sse2_cmp_sd);
- Value *Pred = ConstantInt::get(Type::getInt8Ty(Context), PredCode);
- Value *CallOps[3] = { Ops[0], Ops[1], Pred };
- Result = Builder.CreateCall(cmpsd, CallOps, CallOps+3);
- Result = Builder.CreateBitCast(Result, ResultType);
- return true;
- }
- case ldmxcsr: {
- Function *ldmxcsr =
- Intrinsic::getDeclaration(TheModule, Intrinsic::x86_sse_ldmxcsr);
- Value *Ptr = CreateTemporary(Type::getInt32Ty(Context));
- Builder.CreateStore(Ops[0], Ptr);
- Ptr = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
- Builder.CreateCall(ldmxcsr, Ptr);
- return true;
- }
- case stmxcsr: {
- Function *stmxcsr =
- Intrinsic::getDeclaration(TheModule, Intrinsic::x86_sse_stmxcsr);
- Value *Ptr = CreateTemporary(Type::getInt32Ty(Context));
- Value *BPtr = Builder.CreateBitCast(Ptr, Type::getInt8PtrTy(Context));
- Builder.CreateCall(stmxcsr, BPtr);
-
- Result = Builder.CreateLoad(Ptr);
- return true;
- }
- case palignr: {
- if (isa<ConstantInt>(Ops[2])) {
-
- // In the header we multiply by 8, correct that back now.
- unsigned shiftVal = (cast<ConstantInt>(Ops[2])->getZExtValue())/8;
-
- // If palignr is shifting the pair of input vectors less than 9 bytes,
- // emit a shuffle instruction.
- if (shiftVal <= 8) {
- const llvm::Type *IntTy = Type::getInt32Ty(Context);
- const llvm::Type *EltTy = Type::getInt8Ty(Context);
- const llvm::Type *VecTy = VectorType::get(EltTy, 8);
-
- Ops[1] = Builder.CreateBitCast(Ops[1], VecTy);
- Ops[0] = Builder.CreateBitCast(Ops[0], VecTy);
-
- SmallVector<Constant*, 8> Indices;
- for (unsigned i = 0; i != 8; ++i)
- Indices.push_back(ConstantInt::get(IntTy, shiftVal + i));
-
- Value* SV = ConstantVector::get(Indices);
- Result = Builder.CreateShuffleVector(Ops[1], Ops[0], SV, "palignr");
- return true;
- }
-
- // If palignr is shifting the pair of input vectors more than 8 but less
- // than 16 bytes, emit a logical right shift of the destination.
- if (shiftVal < 16) {
- // MMX has these as 1 x i64 vectors for some odd optimization reasons.
- const llvm::Type *EltTy = Type::getInt64Ty(Context);
- const llvm::Type *VecTy = VectorType::get(EltTy, 1);
-
- Ops[0] = Builder.CreateBitCast(Ops[0], VecTy, "cast");
- Ops[1] = ConstantInt::get(VecTy, (shiftVal-8) * 8);
-
- // Emit the shift as a call to the MMX psrl.q intrinsic.
- Function *F = Intrinsic::getDeclaration(TheModule,
- Intrinsic::x86_mmx_psrl_q);
- Result = Builder.CreateCall(F, &Ops[0], &Ops[0] + 2, "palignr");
- return true;
- }
-
- // If palignr is shifting the pair of vectors 16 bytes or more,
- // emit zero.
- Result = Constant::getNullValue(ResultType);
- return true;
- } else {
- error_at(gimple_location(stmt), "mask must be an immediate");
- Result = Ops[0];
- return true;
- }
- }
- case palignr128: {
- if (isa<ConstantInt>(Ops[2])) {
-
- // In the header we multiply by 8, correct that back now.
- unsigned shiftVal = (cast<ConstantInt>(Ops[2])->getZExtValue())/8;
-
- // If palignr is shifting the pair of input vectors less than 17 bytes,
- // emit a shuffle instruction.
- if (shiftVal <= 16) {
- const llvm::Type *IntTy = Type::getInt32Ty(Context);
- const llvm::Type *EltTy = Type::getInt8Ty(Context);
- const llvm::Type *VecTy = VectorType::get(EltTy, 16);
-
- Ops[1] = Builder.CreateBitCast(Ops[1], VecTy);
- Ops[0] = Builder.CreateBitCast(Ops[0], VecTy);
-
- llvm::SmallVector<Constant*, 16> Indices;
- for (unsigned i = 0; i != 16; ++i)
- Indices.push_back(ConstantInt::get(IntTy, shiftVal + i));
-
- Value* SV = ConstantVector::get(Indices);
- Result = Builder.CreateShuffleVector(Ops[1], Ops[0], SV, "palignr");
- return true;
- }
-
- // If palignr is shifting the pair of input vectors more than 16 but less
- // than 32 bytes, emit a logical right shift of the destination.
- if (shiftVal < 32) {
- const llvm::Type *EltTy = Type::getInt64Ty(Context);
- const llvm::Type *VecTy = VectorType::get(EltTy, 2);
- const llvm::Type *IntTy = Type::getInt32Ty(Context);
-
- Ops[0] = Builder.CreateBitCast(Ops[0], VecTy, "cast");
- Ops[1] = ConstantInt::get(IntTy, (shiftVal-16) * 8);
-
- // Emit the shift as a call to the SSE2 psrl.dq intrinsic.
- llvm::Function *F = Intrinsic::getDeclaration(TheModule,
- Intrinsic::x86_sse2_psrl_dq);
- Result = Builder.CreateCall(F, &Ops[0], &Ops[0] + 2, "palignr");
- return true;
- }
-
- // If palignr is shifting the pair of vectors 32 bytes or more, emit zero.
- Result = Constant::getNullValue(ResultType);
- return true;
- } else {
- error_at(gimple_location(stmt), "mask must be an immediate");
- Result = Ops[0];
- return true;
- }
- }
- }
- llvm_unreachable("Builtin not implemented!");
-}
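// Editorial sketch (not part of the removed file): the dispatch scheme used by
// TargetIntrinsicLower above, reduced to its essentials: a name-sorted table
// searched with std::lower_bound, with the answer cached per builtin code.
// The table entries and helper names below are invented for illustration;
// only the mechanism mirrors the code above.
#include <algorithm>
#include <cstring>
#include <vector>

enum Code { SearchForHandler, AddPS, PAddB, UnsupportedBuiltin };
struct Entry { const char *Name; Code Handler; };

static const Entry Table[] = { { "addps", AddPS }, { "paddb", PAddB } }; // sorted by Name
static const size_t TableSize = sizeof(Table) / sizeof(Table[0]);

static bool NameLess(const Entry &A, const Entry &B) {
  return std::strcmp(A.Name, B.Name) < 0;
}

// Map a builtin name (with the "__builtin_ia32_" prefix already stripped) to
// its handler by binary search over the sorted table.
static Code Lookup(const char *Name) {
  Entry Key = { Name, SearchForHandler };
  const Entry *E = std::lower_bound(Table, Table + TableSize, Key, NameLess);
  if (E != Table + TableSize && !std::strcmp(E->Name, Name))
    return E->Handler;
  return UnsupportedBuiltin;
}

// Cache the answer, indexed by the opaque per-builtin code; resize() fills new
// slots with 0 == SearchForHandler, the same trick the real map relies on.
static Code LookupCached(unsigned FnCode, const char *Name) {
  static std::vector<Code> Cache;
  if (Cache.size() <= FnCode)
    Cache.resize(FnCode + 1);
  if (Cache[FnCode] == SearchForHandler)
    Cache[FnCode] = Lookup(Name);
  return Cache[FnCode];
}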
-
-/* These are defined in i386.c */
-#define MAX_CLASSES 4
-extern "C" enum machine_mode type_natural_mode(tree, CUMULATIVE_ARGS *);
-extern "C" int examine_argument(enum machine_mode, const_tree, int, int*, int*);
-extern "C" int classify_argument(enum machine_mode, const_tree,
- enum x86_64_reg_class classes[MAX_CLASSES], int);
-
-/* Target hook for llvm-abi.h. It returns true if an aggregate of the
- specified type should be passed in memory. This is only called for
- x86-64. */
-static bool llvm_x86_64_should_pass_aggregate_in_memory(tree TreeType,
- enum machine_mode Mode){
- int IntRegs, SSERegs;
- /* If examine_argument returns 0, then it's passed byval in memory. */
- int ret = examine_argument(Mode, TreeType, 0, &IntRegs, &SSERegs);
- if (ret==0)
- return true;
- if (ret==1 && IntRegs==0 && SSERegs==0) // zero-sized struct
- return true;
- return false;
-}
-
-/* Returns true if all elements of the type are integer types. */
-static bool llvm_x86_is_all_integer_types(const Type *Ty) {
- for (Type::subtype_iterator I = Ty->subtype_begin(), E = Ty->subtype_end();
- I != E; ++I) {
- const Type *STy = I->get();
- if (!STy->isIntOrIntVectorTy() && !STy->isPointerTy())
- return false;
- }
- return true;
-}
-
-/* Target hook for llvm-abi.h. It returns true if an aggregate of the
- specified type should be passed in a number of registers of mixed types.
- It also returns a vector of types that correspond to the registers used
- for parameter passing. This is only called for x86-32. */
-bool
-llvm_x86_32_should_pass_aggregate_in_mixed_regs(tree TreeType, const Type *Ty,
- std::vector<const Type*> &Elts){
- // If this is a small fixed size type, investigate it.
- HOST_WIDE_INT SrcSize = int_size_in_bytes(TreeType);
- if (SrcSize <= 0 || SrcSize > 16)
- return false;
-
- // X86-32 passes aggregates on the stack. If this is an extremely simple
- // aggregate whose elements would be passed the same if passed as scalars,
- // pass them that way in order to promote SROA on the caller and callee side.
- // Note that we can't support passing all structs this way. For example,
- // {i16, i16} should be passed in one 32-bit unit, which is not how "i16, i16"
- // would be passed as stand-alone arguments.
- const StructType *STy = dyn_cast<StructType>(Ty);
- if (!STy || STy->isPacked()) return false;
-
- for (unsigned i = 0, e = STy->getNumElements(); i != e; ++i) {
- const Type *EltTy = STy->getElementType(i);
- // 32 and 64-bit integers are fine, as are float and double. Long double
- // (which can be picked as the type for a union of 16 bytes) is not fine,
- // as loads and stores of it get only 10 bytes.
- if (EltTy == Type::getInt32Ty(Context) ||
- EltTy == Type::getInt64Ty(Context) ||
- EltTy == Type::getFloatTy(Context) ||
- EltTy == Type::getDoubleTy(Context) ||
- EltTy->isPointerTy()) {
- Elts.push_back(EltTy);
- continue;
- }
-
- // TODO: Vectors are also ok to pass if they don't require extra alignment.
- // TODO: We can also pass structs like {i8, i32}.
-
- Elts.clear();
- return false;
- }
-
- return true;
-}
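// Editorial illustration (struct names invented, not from the removed file):
// how the x86-32 mixed-register test above treats two small C structs.
struct TwoScalars { int a; double b; };  // converts to {i32, double}; both element
                                         // types are in the allowed set, so Elts
                                         // becomes [i32, double] and the fields are
                                         // passed as individual scalars.
struct TwoShorts  { short a; short b; }; // converts to {i16, i16}; i16 is not in the
                                         // allowed set, so the routine returns false
                                         // and the aggregate is passed the usual way.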
-
-/* It returns true if an aggregate of the specified type should be passed as a
- first class aggregate. */
-bool llvm_x86_should_pass_aggregate_as_fca(tree type, const Type *Ty) {
- if (TREE_CODE(type) != COMPLEX_TYPE)
- return false;
- const StructType *STy = dyn_cast<StructType>(Ty);
- if (!STy || STy->isPacked()) return false;
-
- // FIXME: Currently codegen isn't lowering most _Complex types in a way that
- // makes it ABI compatible for x86-64. Same for _Complex char and _Complex
- // short in 32-bit.
- const Type *EltTy = STy->getElementType(0);
- return !((TARGET_64BIT && (EltTy->isIntegerTy() ||
- EltTy == Type::getFloatTy(Context) ||
- EltTy == Type::getDoubleTy(Context))) ||
- EltTy->isIntegerTy(16) || EltTy->isIntegerTy(8));
-}
-
-/* Target hook for llvm-abi.h. It returns true if an aggregate of the
- specified type should be passed in memory. */
-bool llvm_x86_should_pass_aggregate_in_memory(tree TreeType, const Type *Ty) {
- if (llvm_x86_should_pass_aggregate_as_fca(TreeType, Ty))
- return false;
-
- enum machine_mode Mode = type_natural_mode(TreeType, NULL);
- HOST_WIDE_INT Bytes =
- (Mode == BLKmode) ? int_size_in_bytes(TreeType) : (int) GET_MODE_SIZE(Mode);
-
- // Zero sized array, struct, or class, not passed in memory.
- if (Bytes == 0)
- return false;
-
- if (!TARGET_64BIT) {
- std::vector<const Type*> Elts;
- return !llvm_x86_32_should_pass_aggregate_in_mixed_regs(TreeType, Ty, Elts);
- }
- return llvm_x86_64_should_pass_aggregate_in_memory(TreeType, Mode);
-}
-
-/* count_num_registers_uses - Count the number of GPR and XMM parameter
- registers used so far, adding to NumGPRs and NumXMMs. The caller is
- responsible for initializing the outputs. */
-static void count_num_registers_uses(std::vector<const Type*> &ScalarElts,
- unsigned &NumGPRs, unsigned &NumXMMs) {
- for (unsigned i = 0, e = ScalarElts.size(); i != e; ++i) {
- const Type *Ty = ScalarElts[i];
- if (const VectorType *VTy = dyn_cast<VectorType>(Ty)) {
- if (!TARGET_MACHO)
- continue;
- if (VTy->getNumElements() == 1)
- // v1i64 is passed in GPRs on Darwin.
- ++NumGPRs;
- else
- // All other vector scalar values are passed in XMM registers.
- ++NumXMMs;
- } else if (Ty->isIntegerTy() || Ty->isPointerTy()) {
- ++NumGPRs;
- } else if (Ty==Type::getVoidTy(Context)) {
- // Padding bytes that are not passed anywhere
- ;
- } else {
- // Floating point scalar argument.
- assert(Ty->isFloatingPointTy() && Ty->isPrimitiveType() &&
- "Expecting a floating point primitive type!");
- if (Ty->getTypeID() == Type::FloatTyID
- || Ty->getTypeID() == Type::DoubleTyID)
- ++NumXMMs;
- }
- }
-}
-
-/* Target hook for llvm-abi.h. This is called when an aggregate is being passed
- in registers. If there are only enough available parameter registers to pass
- part of the aggregate, return true. That means the aggregate should instead
- be passed in memory. */
-bool
-llvm_x86_64_aggregate_partially_passed_in_regs(std::vector<const Type*> &Elts,
- std::vector<const Type*> &ScalarElts,
- bool isShadowReturn) {
- // Count the number of GPRs and XMMs used so far. According to the AMD64 ABI
- // document: "If there are no registers available for any eightbyte of an
- // argument, the whole argument is passed on the stack." X86-64 has 6 integer
- // registers and 8 XMM registers available for argument passing.
- // For example, if two GPRs are required but only one is available, then
- // both parts will be in memory.
- // FIXME: This is a temporary solution. To be removed when llvm has first
- // class aggregate values.
- unsigned NumGPRs = isShadowReturn ? 1 : 0;
- unsigned NumXMMs = 0;
- count_num_registers_uses(ScalarElts, NumGPRs, NumXMMs);
-
- unsigned NumGPRsNeeded = 0;
- unsigned NumXMMsNeeded = 0;
- count_num_registers_uses(Elts, NumGPRsNeeded, NumXMMsNeeded);
-
- bool GPRsSatisfied = true;
- if (NumGPRsNeeded) {
- if (NumGPRs < 6) {
- if ((NumGPRs + NumGPRsNeeded) > 6)
- // Only partially satisfied.
- return true;
- } else
- GPRsSatisfied = false;
- }
-
- bool XMMsSatisfied = true;
- if (NumXMMsNeeded) {
- if (NumXMMs < 8) {
- if ((NumXMMs + NumXMMsNeeded) > 8)
- // Only partially satisfied.
- return true;
- } else
- XMMsSatisfied = false;
- }
-
- return !GPRsSatisfied || !XMMsSatisfied;
-}
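// Worked example (editorial): suppose the scalars passed so far already occupy
// 5 of the 6 integer registers (NumGPRs == 5) and the aggregate needs two more
// (NumGPRsNeeded == 2). Then NumGPRs < 6 but 5 + 2 > 6, so the function above
// returns true and the whole aggregate goes to memory rather than being split
// between a register and the stack.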
-
-/* Target hook for llvm-abi.h. It returns true if an aggregate of the
- specified type should be passed in a number of registers of mixed types.
- It also returns a vector of types that correspond to the registers used
- for parameter passing. This is only called for x86-64. */
-bool
-llvm_x86_64_should_pass_aggregate_in_mixed_regs(tree TreeType, const Type *Ty,
- std::vector<const Type*> &Elts){
- if (llvm_x86_should_pass_aggregate_as_fca(TreeType, Ty))
- return false;
-
- enum x86_64_reg_class Class[MAX_CLASSES];
- enum machine_mode Mode = type_natural_mode(TreeType, NULL);
- bool totallyEmpty = true;
- HOST_WIDE_INT Bytes =
- (Mode == BLKmode) ? int_size_in_bytes(TreeType) : (int) GET_MODE_SIZE(Mode);
- int NumClasses = classify_argument(Mode, TreeType, Class, 0);
- if (!NumClasses)
- return false;
-
- if (NumClasses == 1 && Class[0] == X86_64_INTEGERSI_CLASS)
- // This will fit in one i32 register.
- return false;
-
- for (int i = 0; i < NumClasses; ++i) {
- switch (Class[i]) {
- case X86_64_INTEGER_CLASS:
- case X86_64_INTEGERSI_CLASS:
- Elts.push_back(Type::getInt64Ty(Context));
- totallyEmpty = false;
- Bytes -= 8;
- break;
- case X86_64_SSE_CLASS:
- totallyEmpty = false;
- // If it's an SSE class argument, then one of the following is possible:
- // 1. 1 x SSE, size is 8: 1 x Double.
- // 2. 1 x SSE, size is 4: 1 x Float.
- // 3. 1 x SSE + 1 x SSEUP, size is 16: 1 x <4 x i32>, <4 x f32>,
- // <2 x i64>, or <2 x f64>.
- // 4. 1 x SSE + 1 x SSESF, size is 12: 1 x Double, 1 x Float.
- // 5. 2 x SSE, size is 16: 2 x Double.
- if ((NumClasses-i) == 1) {
- if (Bytes == 8) {
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 8;
- } else if (Bytes == 4) {
- Elts.push_back (Type::getFloatTy(Context));
- Bytes -= 4;
- } else
- assert(0 && "Not yet handled!");
- } else if ((NumClasses-i) == 2) {
- if (Class[i+1] == X86_64_SSEUP_CLASS) {
- const Type *Ty = ConvertType(TreeType);
- if (const StructType *STy = dyn_cast<StructType>(Ty))
- // Look past the struct wrapper.
- if (STy->getNumElements() == 1)
- Ty = STy->getElementType(0);
- if (const VectorType *VTy = dyn_cast<VectorType>(Ty)) {
- if (VTy->getNumElements() == 2) {
- if (VTy->getElementType()->isIntegerTy()) {
- Elts.push_back(VectorType::get(Type::getInt64Ty(Context), 2));
- } else {
- Elts.push_back(VectorType::get(Type::getDoubleTy(Context), 2));
- }
- Bytes -= 8;
- } else {
- assert(VTy->getNumElements() == 4);
- if (VTy->getElementType()->isIntegerTy()) {
- Elts.push_back(VectorType::get(Type::getInt32Ty(Context), 4));
- } else {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 4));
- }
- Bytes -= 4;
- }
- } else if (llvm_x86_is_all_integer_types(Ty)) {
- Elts.push_back(VectorType::get(Type::getInt32Ty(Context), 4));
- Bytes -= 4;
- } else {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 4));
- Bytes -= 4;
- }
- } else if (Class[i+1] == X86_64_SSESF_CLASS) {
- assert(Bytes == 12 && "Not yet handled!");
- Elts.push_back(Type::getDoubleTy(Context));
- Elts.push_back(Type::getFloatTy(Context));
- Bytes -= 12;
- } else if (Class[i+1] == X86_64_SSE_CLASS) {
- Elts.push_back(Type::getDoubleTy(Context));
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 16;
- } else if (Class[i+1] == X86_64_SSEDF_CLASS && Bytes == 16) {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 2));
- Elts.push_back(Type::getDoubleTy(Context));
- } else if (Class[i+1] == X86_64_INTEGER_CLASS) {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 2));
- Elts.push_back(Type::getInt64Ty(Context));
- } else if (Class[i+1] == X86_64_NO_CLASS) {
- // padding bytes, don't pass
- Elts.push_back(Type::getDoubleTy(Context));
- Elts.push_back(Type::getVoidTy(Context));
- Bytes -= 16;
- } else
- assert(0 && "Not yet handled!");
- ++i; // Already handled the next one.
- } else
- assert(0 && "Not yet handled!");
- break;
- case X86_64_SSESF_CLASS:
- totallyEmpty = false;
- Elts.push_back(Type::getFloatTy(Context));
- Bytes -= 4;
- break;
- case X86_64_SSEDF_CLASS:
- totallyEmpty = false;
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 8;
- break;
- case X86_64_X87_CLASS:
- case X86_64_X87UP_CLASS:
- case X86_64_COMPLEX_X87_CLASS:
- return false;
- case X86_64_NO_CLASS:
- // Padding bytes that are not passed (unless the entire object consists
- // of padding)
- Elts.push_back(Type::getVoidTy(Context));
- Bytes -= 8;
- break;
- default: assert(0 && "Unexpected register class!");
- }
- }
-
- return !totallyEmpty;
-}
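// Worked example (editorial): a 16-byte struct { long a; long b; } classifies
// as two X86_64_INTEGER_CLASS eightbytes, so the loop above yields
// Elts = [i64, i64] and the aggregate can travel in two GPRs. An eightbyte
// that is pure padding comes back as X86_64_NO_CLASS and contributes a void
// placeholder instead, which count_num_registers_uses skips when counting.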
-
-/* On Darwin x86-32, vectors which are not MMX nor SSE should be passed as
- integers. On Darwin x86-64, such vectors bigger than 128 bits should be
- passed in memory (byval). */
-bool llvm_x86_should_pass_vector_in_integer_regs(tree type) {
- if (!TARGET_MACHO)
- return false;
- if (TREE_CODE(type) == VECTOR_TYPE &&
- TYPE_SIZE(type) &&
- TREE_CODE(TYPE_SIZE(type))==INTEGER_CST) {
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))==64 && TARGET_MMX)
- return false;
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))==128 && TARGET_SSE)
- return false;
- if (TARGET_64BIT && TREE_INT_CST_LOW(TYPE_SIZE(type)) > 128)
- return false;
- }
- return true;
-}
-
-/* On Darwin x86-64, vectors which are bigger than 128 bits should be passed
- byval (in memory). */
-bool llvm_x86_should_pass_vector_using_byval_attr(tree type) {
- if (!TARGET_MACHO)
- return false;
- if (!TARGET_64BIT)
- return false;
- if (TREE_CODE(type) == VECTOR_TYPE &&
- TYPE_SIZE(type) &&
- TREE_CODE(TYPE_SIZE(type))==INTEGER_CST) {
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))<=128)
- return false;
- }
- return true;
-}
-
-/* The MMX vector v1i64 is returned in EAX and EDX on Darwin. Communicate
- this by returning i64 here. Likewise, (generic) vectors such as v2i16
- are returned in EAX.
- On Darwin x86-64, v1i64 is returned in RAX and other MMX vectors are
- returned in XMM0. Judging from comments, this would not be right for
- Win64. Don't know about Linux. */
-tree llvm_x86_should_return_vector_as_scalar(tree type, bool isBuiltin) {
- if (TARGET_MACHO &&
- !isBuiltin &&
- TREE_CODE(type) == VECTOR_TYPE &&
- TYPE_SIZE(type) &&
- TREE_CODE(TYPE_SIZE(type))==INTEGER_CST) {
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))==64 &&
- TYPE_VECTOR_SUBPARTS(type)==1)
- return uint64_type_node;
- if (TARGET_64BIT && TREE_INT_CST_LOW(TYPE_SIZE(type))==64)
- return double_type_node;
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))==32)
- return uint32_type_node;
- }
- return 0;
-}
-
-/* MMX vectors are returned in XMM0 on x86-64 Darwin. The easiest way to
- communicate this is pretend they're doubles.
- Judging from comments, this would not be right for Win64. Don't know
- about Linux. */
-tree llvm_x86_should_return_selt_struct_as_scalar(tree type) {
- tree retType = isSingleElementStructOrArray(type, true, false);
- if (!retType || !TARGET_64BIT || !TARGET_MACHO)
- return retType;
- if (TREE_CODE(retType) == VECTOR_TYPE &&
- TYPE_SIZE(retType) &&
- TREE_CODE(TYPE_SIZE(retType))==INTEGER_CST &&
- TREE_INT_CST_LOW(TYPE_SIZE(retType))==64)
- return double_type_node;
- return retType;
-}
-
-/* MMX vectors v2i32, v4i16, v8i8, v2f32 are returned using sret on Darwin
- 32-bit. Vectors bigger than 128 bits are returned using sret. */
-bool llvm_x86_should_return_vector_as_shadow(tree type, bool isBuiltin) {
- if (TARGET_MACHO &&
- !isBuiltin &&
- !TARGET_64BIT &&
- TREE_CODE(type) == VECTOR_TYPE &&
- TYPE_SIZE(type) &&
- TREE_CODE(TYPE_SIZE(type))==INTEGER_CST) {
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))==64 &&
- TYPE_VECTOR_SUBPARTS(type)>1)
- return true;
- }
- if (TREE_INT_CST_LOW(TYPE_SIZE(type))>128)
- return true;
- return false;
-}
-
-// llvm_x86_should_not_return_complex_in_memory - Return true if TYPE
-// should be returned using multiple value return instruction.
-bool llvm_x86_should_not_return_complex_in_memory(tree type) {
-
- if (!TARGET_64BIT)
- return false;
-
- if (TREE_CODE(type) == COMPLEX_TYPE &&
- TREE_INT_CST_LOW(TYPE_SIZE_UNIT(type)) == 32)
- return true;
-
- return false;
-}
-
-// llvm_suitable_multiple_ret_value_type - Return TRUE if return value
-// of type TY should be returned using multiple value return instruction.
-static bool llvm_suitable_multiple_ret_value_type(const Type *Ty,
- tree TreeType) {
-
- if (!TARGET_64BIT)
- return false;
-
- const StructType *STy = dyn_cast<StructType>(Ty);
- if (!STy)
- return false;
-
- if (llvm_x86_should_not_return_complex_in_memory(TreeType))
- return true;
-
- // Let gcc specific routine answer the question.
- enum x86_64_reg_class Class[MAX_CLASSES];
- enum machine_mode Mode = type_natural_mode(TreeType, NULL);
- int NumClasses = classify_argument(Mode, TreeType, Class, 0);
- if (NumClasses == 0)
- return false;
-
- if (NumClasses == 1 &&
- (Class[0] == X86_64_INTEGERSI_CLASS || Class[0] == X86_64_INTEGER_CLASS))
- // This will fit in one i64 register.
- return false;
-
- if (NumClasses == 2 &&
- (Class[0] == X86_64_NO_CLASS || Class[1] == X86_64_NO_CLASS))
- // One word is padding which is not passed at all; treat this as returning
- // the scalar type of the other word.
- return false;
-
- // Otherwise, use of multiple value return is OK.
- return true;
-}
-
-// llvm_x86_scalar_type_for_struct_return - Return LLVM type if TYPE
-// can be returned as a scalar, otherwise return NULL.
-const Type *llvm_x86_scalar_type_for_struct_return(tree type, unsigned *Offset) {
- *Offset = 0;
- const Type *Ty = ConvertType(type);
- unsigned Size = getTargetData().getTypeAllocSize(Ty);
- if (Size == 0)
- return Type::getVoidTy(Context);
- else if (Size == 1)
- return Type::getInt8Ty(Context);
- else if (Size == 2)
- return Type::getInt16Ty(Context);
- else if (Size <= 4)
- return Type::getInt32Ty(Context);
-
- // Check if Ty should be returned using multiple value return instruction.
- if (llvm_suitable_multiple_ret_value_type(Ty, type))
- return NULL;
-
- if (TARGET_64BIT) {
- // This logic relies on llvm_suitable_multiple_ret_value_type to have
- // removed anything not expected here.
- enum x86_64_reg_class Class[MAX_CLASSES];
- enum machine_mode Mode = type_natural_mode(type, NULL);
- int NumClasses = classify_argument(Mode, type, Class, 0);
- if (NumClasses == 0)
- return Type::getInt64Ty(Context);
-
- if (NumClasses == 1) {
- if (Class[0] == X86_64_INTEGERSI_CLASS ||
- Class[0] == X86_64_INTEGER_CLASS) {
- // one int register
- HOST_WIDE_INT Bytes =
- (Mode == BLKmode) ? int_size_in_bytes(type) :
- (int) GET_MODE_SIZE(Mode);
- if (Bytes>4)
- return Type::getInt64Ty(Context);
- else if (Bytes>2)
- return Type::getInt32Ty(Context);
- else if (Bytes>1)
- return Type::getInt16Ty(Context);
- else
- return Type::getInt8Ty(Context);
- }
- assert(0 && "Unexpected type!");
- }
- if (NumClasses == 2) {
- if (Class[1] == X86_64_NO_CLASS) {
- if (Class[0] == X86_64_INTEGER_CLASS ||
- Class[0] == X86_64_NO_CLASS ||
- Class[0] == X86_64_INTEGERSI_CLASS)
- return Type::getInt64Ty(Context);
- else if (Class[0] == X86_64_SSE_CLASS || Class[0] == X86_64_SSEDF_CLASS)
- return Type::getDoubleTy(Context);
- else if (Class[0] == X86_64_SSESF_CLASS)
- return Type::getFloatTy(Context);
- assert(0 && "Unexpected type!");
- }
- if (Class[0] == X86_64_NO_CLASS) {
- *Offset = 8;
- if (Class[1] == X86_64_INTEGERSI_CLASS ||
- Class[1] == X86_64_INTEGER_CLASS)
- return Type::getInt64Ty(Context);
- else if (Class[1] == X86_64_SSE_CLASS || Class[1] == X86_64_SSEDF_CLASS)
- return Type::getDoubleTy(Context);
- else if (Class[1] == X86_64_SSESF_CLASS)
- return Type::getFloatTy(Context);
- assert(0 && "Unexpected type!");
- }
- assert(0 && "Unexpected type!");
- }
- assert(0 && "Unexpected type!");
- } else {
- if (Size <= 8)
- return Type::getInt64Ty(Context);
- else if (Size <= 16)
- return IntegerType::get(Context, 128);
- else if (Size <= 32)
- return IntegerType::get(Context, 256);
- }
- return NULL;
-}
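// Worked example (editorial): struct { char a; char b; } has an allocation
// size of 2, so the early-out at the top returns i16 and the value comes back
// in part of an integer register. A 16-byte struct of two longs instead takes
// the multiple-value-return path: this function returns NULL and
// llvm_x86_aggr_type_for_struct_return (further down) supplies the type.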
-
-/// llvm_x86_64_get_multiple_return_reg_classes - Find register classes used
-/// to return Ty. It is expected that Ty requires multiple return values.
-/// This routine uses GCC implementation to find required register classes.
-/// The original implementation of this routine is based on
-/// llvm_x86_64_should_pass_aggregate_in_mixed_regs code.
-void
-llvm_x86_64_get_multiple_return_reg_classes(tree TreeType, const Type * /*Ty*/,
- std::vector<const Type*> &Elts) {
- enum x86_64_reg_class Class[MAX_CLASSES];
- enum machine_mode Mode = type_natural_mode(TreeType, NULL);
- HOST_WIDE_INT Bytes =
- (Mode == BLKmode) ? int_size_in_bytes(TreeType) : (int) GET_MODE_SIZE(Mode);
- int NumClasses = classify_argument(Mode, TreeType, Class, 0);
- if (!NumClasses)
- assert(0 && "This type does not need multiple return registers!");
-
- if (NumClasses == 1 && Class[0] == X86_64_INTEGERSI_CLASS)
- // This will fit in one i32 register.
- assert(0 && "This type does not need multiple return registers!");
-
- if (NumClasses == 1 && Class[0] == X86_64_INTEGER_CLASS)
- assert(0 && "This type does not need multiple return registers!");
-
- // classify_argument uses a single X86_64_NO_CLASS as a special case for
- // empty structs. Recognize it and don't add any return values in that
- // case.
- if (NumClasses == 1 && Class[0] == X86_64_NO_CLASS)
- return;
-
- for (int i = 0; i < NumClasses; ++i) {
- switch (Class[i]) {
- case X86_64_INTEGER_CLASS:
- case X86_64_INTEGERSI_CLASS:
- Elts.push_back(Type::getInt64Ty(Context));
- Bytes -= 8;
- break;
- case X86_64_SSE_CLASS:
- // If it's an SSE class argument, then one of the following is possible:
- // 1. 1 x SSE, size is 8: 1 x Double.
- // 2. 1 x SSE, size is 4: 1 x Float.
- // 3. 1 x SSE + 1 x SSEUP, size is 16: 1 x <4 x i32>, <4 x f32>,
- // <2 x i64>, or <2 x f64>.
- // 4. 1 x SSE + 1 x SSESF, size is 12: 1 x Double, 1 x Float.
- // 5. 2 x SSE, size is 16: 2 x Double.
- // 6. 1 x SSE, 1 x NO: Second is padding, pass as double.
- if ((NumClasses-i) == 1) {
- if (Bytes == 8) {
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 8;
- } else if (Bytes == 4) {
- Elts.push_back(Type::getFloatTy(Context));
- Bytes -= 4;
- } else
- assert(0 && "Not yet handled!");
- } else if ((NumClasses-i) == 2) {
- if (Class[i+1] == X86_64_SSEUP_CLASS) {
- const Type *Ty = ConvertType(TreeType);
- if (const StructType *STy = dyn_cast<StructType>(Ty))
- // Look past the struct wrapper.
- if (STy->getNumElements() == 1)
- Ty = STy->getElementType(0);
- if (const VectorType *VTy = dyn_cast<VectorType>(Ty)) {
- if (VTy->getNumElements() == 2) {
- if (VTy->getElementType()->isIntegerTy())
- Elts.push_back(VectorType::get(Type::getInt64Ty(Context), 2));
- else
- Elts.push_back(VectorType::get(Type::getDoubleTy(Context), 2));
- Bytes -= 8;
- } else {
- assert(VTy->getNumElements() == 4);
- if (VTy->getElementType()->isIntegerTy())
- Elts.push_back(VectorType::get(Type::getInt32Ty(Context), 4));
- else
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 4));
- Bytes -= 4;
- }
- } else if (llvm_x86_is_all_integer_types(Ty)) {
- Elts.push_back(VectorType::get(Type::getInt32Ty(Context), 4));
- Bytes -= 4;
- } else {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 4));
- Bytes -= 4;
- }
- } else if (Class[i+1] == X86_64_SSESF_CLASS) {
- assert(Bytes == 12 && "Not yet handled!");
- Elts.push_back(Type::getDoubleTy(Context));
- Elts.push_back(Type::getFloatTy(Context));
- Bytes -= 12;
- } else if (Class[i+1] == X86_64_SSE_CLASS) {
- Elts.push_back(Type::getDoubleTy(Context));
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 16;
- } else if (Class[i+1] == X86_64_SSEDF_CLASS && Bytes == 16) {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 2));
- Elts.push_back(Type::getDoubleTy(Context));
- } else if (Class[i+1] == X86_64_INTEGER_CLASS) {
- Elts.push_back(VectorType::get(Type::getFloatTy(Context), 2));
- Elts.push_back(Type::getInt64Ty(Context));
- } else if (Class[i+1] == X86_64_NO_CLASS) {
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 16;
- } else {
- assert(0 && "Not yet handled!");
- }
- ++i; // Already handled the next one.
- } else
- assert(0 && "Not yet handled!");
- break;
- case X86_64_SSESF_CLASS:
- Elts.push_back(Type::getFloatTy(Context));
- Bytes -= 4;
- break;
- case X86_64_SSEDF_CLASS:
- Elts.push_back(Type::getDoubleTy(Context));
- Bytes -= 8;
- break;
- case X86_64_X87_CLASS:
- case X86_64_X87UP_CLASS:
- case X86_64_COMPLEX_X87_CLASS:
- Elts.push_back(Type::getX86_FP80Ty(Context));
- break;
- case X86_64_NO_CLASS:
- // padding bytes.
- Elts.push_back(Type::getInt64Ty(Context));
- break;
- default: assert(0 && "Unexpected register class!");
- }
- }
-}
-
-// Return LLVM Type if TYPE can be returned as an aggregate,
-// otherwise return NULL.
-const Type *llvm_x86_aggr_type_for_struct_return(tree type) {
- const Type *Ty = ConvertType(type);
- if (!llvm_suitable_multiple_ret_value_type(Ty, type))
- return NULL;
-
- const StructType *STy = cast<StructType>(Ty);
- std::vector<const Type *> ElementTypes;
-
- // Special handling for _Complex.
- if (llvm_x86_should_not_return_complex_in_memory(type)) {
- ElementTypes.push_back(Type::getX86_FP80Ty(Context));
- ElementTypes.push_back(Type::getX86_FP80Ty(Context));
- return StructType::get(Context, ElementTypes, STy->isPacked());
- }
-
- std::vector<const Type*> GCCElts;
- llvm_x86_64_get_multiple_return_reg_classes(type, Ty, GCCElts);
- return StructType::get(Context, GCCElts, false);
-}
-
-// llvm_x86_extract_mrv_array_element - Helper function that extracts an
-// array element from a multiple return value.
-//
-// Here, SRC is returning multiple values and DEST's DESTFIELDNO field is an
-// array. Extract element SRCELEMNO of SRC's SRCFIELDNO value and store it in
-// DEST's DESTFIELDNO field at index DESTELEMNO.
-//
-static void llvm_x86_extract_mrv_array_element(Value *Src, Value *Dest,
- unsigned SrcFieldNo,
- unsigned SrcElemNo,
- unsigned DestFieldNo,
- unsigned DestElemNo,
- LLVMBuilder &Builder,
- bool isVolatile) {
- Value *EVI = Builder.CreateExtractValue(Src, SrcFieldNo, "mrv_gr");
- const StructType *STy = cast<StructType>(Src->getType());
- llvm::Value *Idxs[3];
- Idxs[0] = ConstantInt::get(llvm::Type::getInt32Ty(Context), 0);
- Idxs[1] = ConstantInt::get(llvm::Type::getInt32Ty(Context), DestFieldNo);
- Idxs[2] = ConstantInt::get(llvm::Type::getInt32Ty(Context), DestElemNo);
- Value *GEP = Builder.CreateGEP(Dest, Idxs, Idxs+3, "mrv_gep");
- if (STy->getElementType(SrcFieldNo)->isVectorTy()) {
- Value *ElemIndex = ConstantInt::get(Type::getInt32Ty(Context), SrcElemNo);
- Value *EVIElem = Builder.CreateExtractElement(EVI, ElemIndex, "mrv");
- Builder.CreateStore(EVIElem, GEP, isVolatile);
- } else {
- Builder.CreateStore(EVI, GEP, isVolatile);
- }
-}
-
-// llvm_x86_extract_multiple_return_value - Extract multiple values returned
-// by SRC and store them in DEST. It is expected that SRC and
-// DEST types are StructType, but they may not match.
-void llvm_x86_extract_multiple_return_value(Value *Src, Value *Dest,
- bool isVolatile,
- LLVMBuilder &Builder) {
-
- const StructType *STy = cast<StructType>(Src->getType());
- unsigned NumElements = STy->getNumElements();
-
- const PointerType *PTy = cast<PointerType>(Dest->getType());
- const StructType *DestTy = cast<StructType>(PTy->getElementType());
-
- unsigned SNO = 0;
- unsigned DNO = 0;
-
- if (DestTy->getNumElements() == 3
- && DestTy->getElementType(0)->getTypeID() == Type::FloatTyID
- && DestTy->getElementType(1)->getTypeID() == Type::FloatTyID
- && DestTy->getElementType(2)->getTypeID() == Type::FloatTyID) {
- // DestTy is { float, float, float }
- // STy is { <4 x float>, float }
-
- Value *EVI = Builder.CreateExtractValue(Src, 0, "mrv_gr");
-
- Value *E0Index = ConstantInt::get(Type::getInt32Ty(Context), 0);
- Value *EVI0 = Builder.CreateExtractElement(EVI, E0Index, "mrv.v");
- Value *GEP0 = Builder.CreateStructGEP(Dest, 0, "mrv_gep");
- Builder.CreateStore(EVI0, GEP0, isVolatile);
-
- Value *E1Index = ConstantInt::get(Type::getInt32Ty(Context), 1);
- Value *EVI1 = Builder.CreateExtractElement(EVI, E1Index, "mrv.v");
- Value *GEP1 = Builder.CreateStructGEP(Dest, 1, "mrv_gep");
- Builder.CreateStore(EVI1, GEP1, isVolatile);
-
- Value *GEP2 = Builder.CreateStructGEP(Dest, 2, "mrv_gep");
- Value *EVI2 = Builder.CreateExtractValue(Src, 1, "mrv_gr");
- Builder.CreateStore(EVI2, GEP2, isVolatile);
- return;
- }
-
- while (SNO < NumElements) {
-
- const Type *DestElemType = DestTy->getElementType(DNO);
-
- // Directly access first class values using getresult.
- if (DestElemType->isSingleValueType()) {
- Value *GEP = Builder.CreateStructGEP(Dest, DNO, "mrv_gep");
- Value *EVI = Builder.CreateExtractValue(Src, SNO, "mrv_gr");
- Builder.CreateStore(EVI, GEP, isVolatile);
- ++DNO; ++SNO;
- continue;
- }
-
- // Special treatment for _Complex.
- if (DestElemType->isStructTy()) {
- llvm::Value *Idxs[3];
- Idxs[0] = ConstantInt::get(llvm::Type::getInt32Ty(Context), 0);
- Idxs[1] = ConstantInt::get(llvm::Type::getInt32Ty(Context), DNO);
-
- Idxs[2] = ConstantInt::get(llvm::Type::getInt32Ty(Context), 0);
- Value *GEP = Builder.CreateGEP(Dest, Idxs, Idxs+3, "mrv_gep");
- Value *EVI = Builder.CreateExtractValue(Src, 0, "mrv_gr");
- Builder.CreateStore(EVI, GEP, isVolatile);
- ++SNO;
-
- Idxs[2] = ConstantInt::get(llvm::Type::getInt32Ty(Context), 1);
- GEP = Builder.CreateGEP(Dest, Idxs, Idxs+3, "mrv_gep");
- EVI = Builder.CreateExtractValue(Src, 1, "mrv_gr");
- Builder.CreateStore(EVI, GEP, isVolatile);
- ++DNO; ++SNO;
- continue;
- }
-
- // Access array elements individually. Note that the Src and Dest types may
- // not match. For example { <2 x float>, float } and { float[3]; }
- const ArrayType *ATy = cast<ArrayType>(DestElemType);
- unsigned ArraySize = ATy->getNumElements();
- unsigned DElemNo = 0; // DestTy's DNO field's element number
- while (DElemNo < ArraySize) {
- unsigned i = 0;
- unsigned Size = 1;
-
- if (const VectorType *SElemTy =
- dyn_cast<VectorType>(STy->getElementType(SNO))) {
- Size = SElemTy->getNumElements();
- if (SElemTy->getElementType()->getTypeID() == Type::FloatTyID
- && Size == 4)
- // Ignore last two <4 x float> elements.
- Size = 2;
- }
- while (i < Size) {
- llvm_x86_extract_mrv_array_element(Src, Dest, SNO, i++,
- DNO, DElemNo++,
- Builder, isVolatile);
- }
- // Consumed this src field. Try next one.
- ++SNO;
- }
- // Finished building current dest field.
- ++DNO;
- }
-}
-
-/// llvm_x86_should_pass_aggregate_in_integer_regs - x86-32 is the same as the
-/// default. x86-64 detects the case where a type is 16 bytes long but
-/// only 8 of them are passed, the rest being padding (*size is set to 8
-/// to identify this case). It also pads out the size to that of a full
-/// register. This means we'll be loading bytes off the end of the object
-/// in some cases. That's what gcc does, so it must be OK, right? Right?
-bool llvm_x86_should_pass_aggregate_in_integer_regs(tree type, unsigned *size,
- bool *DontCheckAlignment) {
- *size = 0;
- if (TARGET_64BIT) {
- enum x86_64_reg_class Class[MAX_CLASSES];
- enum machine_mode Mode = type_natural_mode(type, NULL);
- int NumClasses = classify_argument(Mode, type, Class, 0);
- *DontCheckAlignment= true;
- if (NumClasses == 1 && (Class[0] == X86_64_INTEGER_CLASS ||
- Class[0] == X86_64_INTEGERSI_CLASS)) {
- // one int register
- HOST_WIDE_INT Bytes =
- (Mode == BLKmode) ? int_size_in_bytes(type) : (int) GET_MODE_SIZE(Mode);
- if (Bytes>4)
- *size = 8;
- else if (Bytes>2)
- *size = 4;
- else
- *size = Bytes;
- return true;
- }
- if (NumClasses == 2 && (Class[0] == X86_64_INTEGERSI_CLASS ||
- Class[0] == X86_64_INTEGER_CLASS)) {
- if (Class[1] == X86_64_INTEGER_CLASS) {
- // 16 byte object, 2 int registers
- *size = 16;
- return true;
- }
- // IntegerSI can occur only as element 0.
- if (Class[1] == X86_64_NO_CLASS) {
- // 16 byte object, only 1st register has information
- *size = 8;
- return true;
- }
- }
- return false;
- }
- else
- return !isSingleElementStructOrArray(type, false, true);
-}
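The x86-64 branch above exists largely for 16-byte objects whose second eightbyte is pure padding, as the header comment for LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS below also notes. A hedged illustration (the struct is invented for the example):

    /* sizeof(S) is 16, but only the first 8 bytes carry data, so
       classify_argument should report { X86_64_INTEGER_CLASS, X86_64_NO_CLASS };
       the routine then sets *size = 8 and the object is passed as one i64. */
    struct S { long x; } __attribute__((aligned(16)));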
Removed: dragonegg/trunk/x86/llvm-target.h
URL: http://llvm.org/viewvc/llvm-project/dragonegg/trunk/x86/llvm-target.h?rev=127407&view=auto
==============================================================================
--- dragonegg/trunk/x86/llvm-target.h (original)
+++ dragonegg/trunk/x86/llvm-target.h (removed)
@@ -1,403 +0,0 @@
-//==-- llvm-target.h - Target hooks for GCC to LLVM conversion ---*- C++ -*-==//
-//
-// Copyright (C) 2007, 2008, 2009, 2010, 2011 Anton Korobeynikov, Duncan Sands
-// et al.
-//
-// This file is part of DragonEgg.
-//
-// DragonEgg is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the Free Software
-// Foundation; either version 2, or (at your option) any later version.
-//
-// DragonEgg is distributed in the hope that it will be useful, but WITHOUT ANY
-// WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
-// A PARTICULAR PURPOSE. See the GNU General Public License for more details.
-//
-// You should have received a copy of the GNU General Public License along with
-// DragonEgg; see the file COPYING. If not, write to the Free Software
-// Foundation, 51 Franklin Street, Suite 500, Boston, MA 02110-1335, USA.
-//
-//===----------------------------------------------------------------------===//
-// This file declares some target-specific hooks for GCC to LLVM conversion.
-//===----------------------------------------------------------------------===//
-
-#ifndef LLVM_TARGET_H
-#define LLVM_TARGET_H
-
-/* LLVM specific stuff for supporting calling convention output */
-#define TARGET_ADJUST_LLVM_CC(CC, type) \
- { \
- tree_node *type_attributes = TYPE_ATTRIBUTES (type); \
- if (lookup_attribute ("stdcall", type_attributes)) { \
- CC = CallingConv::X86_StdCall; \
- } else if (lookup_attribute("fastcall", type_attributes)) { \
- CC = CallingConv::X86_FastCall; \
- } \
- }
-
-#define TARGET_ADJUST_LLVM_RETATTR(Rattributes, type) \
- { \
- tree_node *type_attributes = TYPE_ATTRIBUTES (type); \
- if (!TARGET_64BIT && (TARGET_SSEREGPARM || \
- lookup_attribute("sseregparm", type_attributes)))\
- RAttributes |= Attribute::InReg; \
- }
-
-/* LLVM specific stuff for converting gcc's `regparm` attribute to LLVM's
- `inreg` parameter attribute */
-#define LLVM_TARGET_ENABLE_REGPARM
-
-extern "C" int ix86_regparm;
-
-#define LLVM_TARGET_INIT_REGPARM(local_regparm, local_fp_regparm, type) \
- { \
- tree_node *attr; \
- local_regparm = ix86_regparm; \
- local_fp_regparm = TARGET_SSEREGPARM ? 3 : 0; \
- attr = lookup_attribute ("regparm", \
- TYPE_ATTRIBUTES (type)); \
- if (attr) { \
- local_regparm = TREE_INT_CST_LOW (TREE_VALUE \
- (TREE_VALUE (attr))); \
- } \
- attr = lookup_attribute("sseregparm", \
- TYPE_ATTRIBUTES (type)); \
- if (attr) \
- local_fp_regparm = 3; \
- }
-
-#define LLVM_ADJUST_REGPARM_ATTRIBUTE(PAttribute, Type, Size, \
- local_regparm, \
- local_fp_regparm) \
- { \
- if (!TARGET_64BIT) { \
- if (TREE_CODE(Type) == REAL_TYPE && \
- (TYPE_PRECISION(Type)==32 || \
- TYPE_PRECISION(Type)==64)) { \
- local_fp_regparm -= 1; \
- if (local_fp_regparm >= 0) \
- PAttribute |= Attribute::InReg; \
- else \
- local_fp_regparm = 0; \
- } else if (INTEGRAL_TYPE_P(Type) || \
- POINTER_TYPE_P(Type)) { \
- int words = \
- (Size + BITS_PER_WORD - 1) / BITS_PER_WORD; \
- local_regparm -= words; \
- if (local_regparm>=0) \
- PAttribute |= Attribute::InReg; \
- else \
- local_regparm = 0; \
- } \
- } \
- }
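Taken together, the two regparm hooks behave roughly as in this hypothetical 32-bit example: local_regparm starts at 3 because of the attribute, each integer or pointer word consumes one slot and receives Attribute::InReg, and once the budget is used up the remaining arguments stay on the stack.

    /* Hypothetical i386 declaration: a, b and c are marked inreg (three words
       consume the regparm(3) budget); d is passed on the stack.  */
    int __attribute__((regparm(3))) accumulate(int a, int b, int *c, int d);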
-
-#define LLVM_SET_RED_ZONE_FLAG(disable_red_zone) \
- if (TARGET_64BIT && TARGET_NO_RED_ZONE) \
- disable_red_zone = 1;
-
-#ifdef LLVM_ABI_H
-
-/* On x86-32 objects containing SSE vectors are 16 byte aligned, everything
- else 4. On x86-64 vectors are 8-byte aligned, everything else can
- be figured out by the back end. */
-#define LLVM_BYVAL_ALIGNMENT(T) \
- (TYPE_ALIGN(T) / 8)
-
-extern tree_node *llvm_x86_should_return_selt_struct_as_scalar(tree_node *);
-
-/* Structs containing a single data field plus zero-length fields are
- considered as if they were the type of the data field. On x86-64,
- if the element type is an MMX vector, return it as double (which will
- get it into XMM0). */
-
-#define LLVM_SHOULD_RETURN_SELT_STRUCT_AS_SCALAR(X) \
- llvm_x86_should_return_selt_struct_as_scalar((X))
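A hedged example of the single-data-field case (hypothetical type): the wrapper below is returned as if it were a plain double, the zero-length member being ignored.

    /* Hypothetical single-element struct: returned as its lone data field. */
    struct Wrapper {
      double value;    /* the struct is returned as if it were `double`       */
      int    pad[0];   /* zero-length field, does not affect the return value */
    };
    struct Wrapper get_value(void);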
-
-extern bool llvm_x86_should_pass_aggregate_in_integer_regs(tree_node *,
- unsigned*, bool*);
-
-/* LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS - Return true if this aggregate
- value should be passed in integer registers. This differs from the usual
- handling in that x86-64 passes 128-bit structs and unions which only
- contain data in the first 64 bits, as 64-bit objects. (These can be
-   created by abusing __attribute__((aligned)).) */
-#define LLVM_SHOULD_PASS_AGGREGATE_IN_INTEGER_REGS(X, Y, Z) \
- llvm_x86_should_pass_aggregate_in_integer_regs((X), (Y), (Z))
-
-extern const Type *llvm_x86_scalar_type_for_struct_return(tree_node *type,
- unsigned *Offset);
-
-/* LLVM_SCALAR_TYPE_FOR_STRUCT_RETURN - Return LLVM Type if X can be
- returned as a scalar, otherwise return NULL. */
-#define LLVM_SCALAR_TYPE_FOR_STRUCT_RETURN(X, Y) \
- llvm_x86_scalar_type_for_struct_return((X), (Y))
-
-extern const Type *llvm_x86_aggr_type_for_struct_return(tree_node *type);
-
-/* LLVM_AGGR_TYPE_FOR_STRUCT_RETURN - Return LLVM Type if X can be
- returned as an aggregate, otherwise return NULL. */
-#define LLVM_AGGR_TYPE_FOR_STRUCT_RETURN(X, CC) \
- llvm_x86_aggr_type_for_struct_return(X)
-
-extern void llvm_x86_extract_multiple_return_value(Value *Src, Value *Dest,
- bool isVolatile,
- LLVMBuilder &B);
-
-/* LLVM_EXTRACT_MULTIPLE_RETURN_VALUE - Extract multiple return value from
- SRC and assign it to DEST. */
-#define LLVM_EXTRACT_MULTIPLE_RETURN_VALUE(Src,Dest,V,B) \
- llvm_x86_extract_multiple_return_value((Src),(Dest),(V),(B))
-
-extern bool llvm_x86_should_pass_vector_using_byval_attr(tree_node *);
-
-/* On x86-64, vectors which are neither MMX nor SSE should be passed byval. */
-#define LLVM_SHOULD_PASS_VECTOR_USING_BYVAL_ATTR(X) \
- llvm_x86_should_pass_vector_using_byval_attr((X))
-
-extern bool llvm_x86_should_pass_vector_in_integer_regs(tree_node *);
-
-/* On x86-32, vectors which are neither MMX nor SSE should be passed as integers. */
-#define LLVM_SHOULD_PASS_VECTOR_IN_INTEGER_REGS(X) \
- llvm_x86_should_pass_vector_in_integer_regs((X))
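As a hedged illustration of a vector that is neither an MMX nor an SSE type (hypothetical typedef, assuming AVX is not enabled): a 32-byte integer vector is passed byval on x86-64 and passed as integers on x86-32.

    /* Hypothetical generic vector: too wide for SSE, not an MMX type. */
    typedef int v8i32 __attribute__((vector_size(32)));
    void consume(v8i32 v);  /* x86-64: byval; x86-32: passed as integers */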
-
-extern tree_node *llvm_x86_should_return_vector_as_scalar(tree_node *, bool);
-
-/* The MMX vector v1i64 is returned in EAX and EDX on Darwin. Communicate
- this by returning i64 here. Likewise, (generic) vectors such as v2i16
- are returned in EAX.
- On Darwin x86-64, MMX vectors are returned in XMM0. Communicate this by
- returning f64. */
-#define LLVM_SHOULD_RETURN_VECTOR_AS_SCALAR(X,isBuiltin)\
- llvm_x86_should_return_vector_as_scalar((X), (isBuiltin))
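Concretely (hypothetical typedefs, following the cases in the comment above):

    /* Hypothetical vector typedefs for the scalar-return cases above. */
    typedef short     v2i16 __attribute__((vector_size(4)));  /* returned in EAX as an i32             */
    typedef long long v1i64 __attribute__((vector_size(8)));  /* Darwin x86-32: i64 in EAX and EDX     */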
-
-extern bool llvm_x86_should_return_vector_as_shadow(tree_node *, bool);
-
-/* MMX vectors v2i32, v4i16, v8i8, v2f32 are returned using sret on Darwin
-   32-bit. Vectors bigger than 128 bits are returned using sret. */
-#define LLVM_SHOULD_RETURN_VECTOR_AS_SHADOW(X,isBuiltin)\
- llvm_x86_should_return_vector_as_shadow((X),(isBuiltin))
-
-extern bool
-llvm_x86_should_not_return_complex_in_memory(tree_node *type);
-
-/* LLVM_SHOULD_NOT_RETURN_COMPLEX_IN_MEMORY - A hook to allow
- special _Complex handling. Return true if X should be returned using
-   a multiple-value return instruction. */
-#define LLVM_SHOULD_NOT_RETURN_COMPLEX_IN_MEMORY(X) \
- llvm_x86_should_not_return_complex_in_memory((X))
-
-extern bool
-llvm_x86_should_pass_aggregate_as_fca(tree_node *type, const Type *);
-
-/* LLVM_SHOULD_PASS_AGGREGATE_AS_FCA - Return true if an aggregate of the
- specified type should be passed as a first-class aggregate. */
-#ifndef LLVM_SHOULD_PASS_AGGREGATE_AS_FCA
-#define LLVM_SHOULD_PASS_AGGREGATE_AS_FCA(X, TY) \
- llvm_x86_should_pass_aggregate_as_fca(X, TY)
-#endif
-
-extern bool llvm_x86_should_pass_aggregate_in_memory(tree_node *, const Type *);
-
-#define LLVM_SHOULD_PASS_AGGREGATE_USING_BYVAL_ATTR(X, TY) \
- llvm_x86_should_pass_aggregate_in_memory(X, TY)
-
-
-extern bool
-llvm_x86_64_should_pass_aggregate_in_mixed_regs(tree_node *, const Type *Ty,
- std::vector<const Type*>&);
-extern bool
-llvm_x86_32_should_pass_aggregate_in_mixed_regs(tree_node *, const Type *Ty,
- std::vector<const Type*>&);
-
-#define LLVM_SHOULD_PASS_AGGREGATE_IN_MIXED_REGS(T, TY, CC, E) \
- (TARGET_64BIT ? \
- llvm_x86_64_should_pass_aggregate_in_mixed_regs((T), (TY), (E)) : \
- llvm_x86_32_should_pass_aggregate_in_mixed_regs((T), (TY), (E)))
-
-extern
-bool llvm_x86_64_aggregate_partially_passed_in_regs(std::vector<const Type*>&,
- std::vector<const Type*>&,
- bool);
-
-#define LLVM_AGGREGATE_PARTIALLY_PASSED_IN_REGS(E, SE, ISR, CC) \
- (TARGET_64BIT ? \
- llvm_x86_64_aggregate_partially_passed_in_regs((E), (SE), (ISR)) : \
- false)
-
-#endif /* LLVM_ABI_H */
-
-/* Register class used for passing given 64bit part of the argument.
- These represent classes as documented by the PS ABI, with the exception
- of SSESF, SSEDF classes, that are basically SSE class, just gcc will
- use SF or DFmode move instead of DImode to avoid reformatting penalties.
-
- Similarly we play games with INTEGERSI_CLASS to use cheaper SImode moves
- whenever possible (upper half does contain padding).
- */
-enum x86_64_reg_class
- {
- X86_64_NO_CLASS,
- X86_64_INTEGER_CLASS,
- X86_64_INTEGERSI_CLASS,
- X86_64_SSE_CLASS,
- X86_64_SSESF_CLASS,
- X86_64_SSEDF_CLASS,
- X86_64_SSEUP_CLASS,
- X86_64_X87_CLASS,
- X86_64_X87UP_CLASS,
- X86_64_COMPLEX_X87_CLASS,
- X86_64_MEMORY_CLASS
- };
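A hedged worked example of these classes (hypothetical aggregate, following the usual x86-64 psABI rules): a struct holding a pointer and a double spans two eightbytes; the first classifies as X86_64_INTEGER_CLASS and is passed in a general-purpose register, the second as X86_64_SSE_CLASS and is passed in an XMM register.

    /* Hypothetical 16-byte aggregate:
         eightbyte 0 (ptr) -> X86_64_INTEGER_CLASS, general-purpose register
         eightbyte 1 (val) -> X86_64_SSE_CLASS,     XMM register             */
    struct Mixed {
      void  *ptr;
      double val;
    };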
-
-/* LLVM_TARGET_INTRINSIC_PREFIX - Specify what prefix this target uses for its
- * intrinsics.
- */
-#define LLVM_TARGET_INTRINSIC_PREFIX "x86"
-
-/* LLVM_TARGET_NAME - This specifies the name of the target, which correlates to
- * the llvm::InitializeXXXTarget() function.
- */
-#define LLVM_TARGET_NAME X86
-
-/* Turn -march=xx into a CPU type.
- */
-#define LLVM_SET_SUBTARGET_FEATURES(F) \
- { if (TARGET_MACHO && ! strcmp (ix86_arch_string, "apple")) \
- F.setCPU(TARGET_64BIT ? "core2" : "yonah"); \
- else \
- F.setCPU(ix86_arch_string); \
- \
- if (TARGET_64BIT) \
- F.AddFeature("64bit"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_64BIT) \
- F.AddFeature("64bit", false); \
- \
- if (TARGET_MMX) \
- F.AddFeature("mmx"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_MMX) \
- F.AddFeature("mmx", false); \
- \
- if (TARGET_3DNOW) \
- F.AddFeature("3dnow"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_3DNOW) \
- F.AddFeature("3dnow", false); \
- \
- if (TARGET_3DNOW_A) \
- F.AddFeature("3dnowa"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_3DNOW_A) \
- F.AddFeature("3dnowa", false); \
- \
- if (TARGET_SSE) \
- F.AddFeature("sse"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSE) \
- F.AddFeature("sse", false); \
- \
- if (TARGET_SSE2) \
- F.AddFeature("sse2"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSE2) \
- F.AddFeature("sse2", false); \
- \
- if (TARGET_SSE3) \
- F.AddFeature("sse3"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSE3) \
- F.AddFeature("sse3", false); \
- \
- if (TARGET_SSSE3) \
- F.AddFeature("ssse3"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSSE3) \
- F.AddFeature("ssse3", false); \
- \
- if (TARGET_SSE4_1) \
- F.AddFeature("sse41"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSE4_1) \
- F.AddFeature("sse41", false); \
- \
- if (TARGET_SSE4_2) \
- F.AddFeature("sse42"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSE4_2) \
- F.AddFeature("sse42", false); \
- \
- if (TARGET_AVX) \
- F.AddFeature("avx"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_AVX) \
- F.AddFeature("avx", false); \
- \
- if (TARGET_FMA) \
- F.AddFeature("fma3"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_FMA) \
- F.AddFeature("fma3", false); \
- \
- if (TARGET_SSE4A) \
- F.AddFeature("sse4a"); \
- else if (target_flags_explicit & OPTION_MASK_ISA_SSE4A) \
- F.AddFeature("sse4a", false); \
- }
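As a hedged illustration of the net effect (hypothetical command line; the exact feature set depends on the GCC options and defaults in use), -march=core2 -m64 -mno-sse4.1 would translate roughly into:

    /* Illustrative only: the calls the macro would make for that command line.
         F.setCPU("core2");
         F.AddFeature("64bit"); F.AddFeature("mmx");  F.AddFeature("sse");
         F.AddFeature("sse2");  F.AddFeature("sse3"); F.AddFeature("ssse3");
         F.AddFeature("sse41", false);   // explicitly disabled by -mno-sse4.1
    */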
-
-#define LLVM_SET_IMPLICIT_FLOAT(flag_no_implicit_float) \
- if (!TARGET_80387) \
- flag_no_implicit_float = 1; \
- else \
- flag_no_implicit_float = 0;
-
-/* LLVM ABI definition macros. */
-
-/* When -m64 is specified, set the architecture to x86_64-os-blah even if the
- * compiler was configured for i[3456]86-os-blah.
- */
-#define LLVM_OVERRIDE_TARGET_ARCH() \
- (TARGET_64BIT ? "x86_64" : "i386")
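For example (hypothetical configuration):

    /* A compiler configured for i686-pc-linux-gnu but invoked with -m64:
         LLVM_OVERRIDE_TARGET_ARCH() -> "x86_64", so the LLVM triple becomes
         x86_64-pc-linux-gnu rather than the configured i686 one.           */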
-
-/* LLVM_TARGET_INTRINSIC_LOWER - To handle builtins, we want to expand the
- * invocation into normal LLVM code. If the target can handle the builtin, this
- * macro should call the target TreeToLLVM::TargetIntrinsicLower method and
- * return true. This macro is invoked from a method in the TreeToLLVM class.
- */
-#define LLVM_TARGET_INTRINSIC_LOWER(STMT, FNDECL, DESTLOC, RESULT, DESTTY, OPS) \
- TargetIntrinsicLower(STMT, FNDECL, DESTLOC, RESULT, DESTTY, OPS);
-
-/* LLVM_GET_REG_NAME - When extracting a register name for a constraint, use
- the string extracted from the magic symbol built for that register, rather
- than reg_names. The latter maps both AH and AL to the same thing, which
- means we can't distinguish them. */
-#define LLVM_GET_REG_NAME(REG_NAME, REG_NUM) __extension__ \
- ({ const char *nm = (REG_NAME); \
- if (nm && (*nm == '%' || *nm == '#')) ++nm; \
- ((!nm || ISDIGIT (*nm)) ? reg_names[REG_NUM] : nm); })
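A hedged sketch of what the macro yields for a couple of inputs (illustrative values only):

    /* The leading '%' or '#' is stripped; a missing or purely numeric name
       falls back to reg_names[REG_NUM].
         LLVM_GET_REG_NAME("%ah", 0) -> "ah"   (stays distinct from "al")
         LLVM_GET_REG_NAME("4",   4) -> reg_names[4]                       */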
-
-/* LLVM_CANONICAL_ADDRESS_CONSTRAINTS - Valid x86 memory addresses include
- symbolic values and immediates. Canonicalize GCC's "p" constraint for
- memory addresses to allow both memory and immediate operands. */
-#define LLVM_CANONICAL_ADDRESS_CONSTRAINTS "im"
-
-/* Propagate code model setting to backend */
-#define LLVM_SET_MACHINE_OPTIONS(argvec) \
- do { \
- switch (ix86_cmodel) { \
- default: \
- sorry ("code model %<%s%> not supported yet", \
- ix86_cmodel_string); \
- break; \
- case CM_SMALL: \
- case CM_SMALL_PIC: \
- argvec.push_back("--code-model=small"); \
- break; \
- case CM_KERNEL: \
- argvec.push_back("--code-model=kernel"); \
- break; \
- case CM_MEDIUM: \
- case CM_MEDIUM_PIC: \
- argvec.push_back("--code-model=medium"); \
- break; \
- case CM_32: \
- argvec.push_back("--code-model=default"); \
- break; \
- } \
- if (TARGET_OMIT_LEAF_FRAME_POINTER) \
- argvec.push_back("--disable-non-leaf-fp-elim"); \
- \
- if (ix86_force_align_arg_pointer) \
- argvec.push_back("-force-align-stack"); \
- } while (0)
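For instance (hypothetical invocation), compiling with -mcmodel=kernel -momit-leaf-frame-pointer would forward the following options to the backend:

    /* Illustrative only:
         gcc -fplugin=dragonegg.so -mcmodel=kernel -momit-leaf-frame-pointer ...
         argvec gains "--code-model=kernel" and "--disable-non-leaf-fp-elim"    */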
-
-#endif /* LLVM_TARGET_H */