[llvm-commits] [llvm-gcc-4.2] r43913 [3/80] - in /llvm-gcc-4.2/trunk: boehm-gc/ boehm-gc/Mac_files/ boehm-gc/cord/ boehm-gc/doc/ boehm-gc/include/ boehm-gc/include/private/ boehm-gc/tests/ libffi/ libffi/include/ libffi/src/ libffi/src/alpha/ libffi/src/arm/ libffi/src/cris/ libffi/src/frv/ libffi/src/ia64/ libffi/src/m32r/ libffi/src/m68k/ libffi/src/mips/ libffi/src/pa/ libffi/src/powerpc/ libffi/src/s390/ libffi/src/sh/ libffi/src/sh64/ libffi/src/sparc/ libffi/src/x86/ libffi/testsuite/ libffi/testsuite/config/ lib...
Bill Wendling
isanbard at gmail.com
Thu Nov 8 14:57:11 PST 2007
Added: llvm-gcc-4.2/trunk/boehm-gc/doc/gcinterface.html
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/doc/gcinterface.html?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/doc/gcinterface.html (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/doc/gcinterface.html Thu Nov 8 16:56:19 2007
@@ -0,0 +1,248 @@
+<!DOCTYPE HTML>
+<HEAD>
+<TITLE>Garbage Collector Interface</TITLE>
+</HEAD>
+<BODY>
+<H1>C Interface</h1>
+On many platforms, a single-threaded garbage collector library can be built
+to act as a plug-in malloc replacement.
+(Build with <TT>-DREDIRECT_MALLOC=GC_malloc -DIGNORE_FREE</tt>.)
+This is often the best way to deal with third-party libraries
+which leak or prematurely free objects. <TT>-DREDIRECT_MALLOC</tt> is intended
+primarily as an easy way to adapt old code, not for new development.
+<P>
+New code should use the interface discussed below.
+<P>
+Code must be linked against the GC library. On most UNIX platforms,
+depending on how the collector is built, this will be <TT>gc.a</tt>
+or <TT>libgc.{a,so}</tt>.
+<P>
+The following describes the standard C interface to the garbage collector.
+It is not a complete definition of the interface. It describes only the
+most commonly used functionality, approximately in decreasing order of
+frequency of use.
+The full interface is described in
+<A HREF="http://hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gch.txt">gc.h</a>
+or <TT>gc.h</tt> in the distribution.
+<P>
+Clients should include <TT>gc.h</tt>.
+<P>
+In the case of multithreaded code,
+<TT>gc.h</tt> should be included after the threads header file, and
+after defining the appropriate <TT>GC_</tt><I>XXXX</i><TT>_THREADS</tt> macro.
+(For 6.2alpha4 and later, simply defining <TT>GC_THREADS</tt> should suffice.)
+The header file <TT>gc.h</tt> must be included
+in files that use either GC or threads primitives, since threads primitives
+will be redefined to cooperate with the GC on many platforms.
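+<P>
+For example, on a pthreads platform a client source file might begin as
+follows (a minimal sketch; the exact macro name depends on the platform
+and collector version, as noted above):
+<PRE>
+#define GC_THREADS          /* or the platform-specific GC_..._THREADS */
+#include &lt;pthread.h&gt;    /* threads header first */
+#include "gc.h"             /* then gc.h, with the macro already defined */
+</pre>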
+<DL>
+<DT> <B>void * GC_MALLOC(size_t <I>nbytes</i>)</b>
+<DD>
+Allocates and clears <I>nbytes</i> of storage.
+Requires (amortized) time proportional to <I>nbytes</i>.
+The resulting object will be automatically deallocated when unreferenced.
+References from objects allocated with the system malloc are usually not
+considered by the collector. (See <TT>GC_MALLOC_UNCOLLECTABLE</tt>, however.)
+<TT>GC_MALLOC</tt> is a macro which invokes <TT>GC_malloc</tt> by default or,
+if <TT>GC_DEBUG</tt>
+is defined before <TT>gc.h</tt> is included, a debugging version that checks
+occasionally for overwrite errors, and the like.
+<DT> <B>void * GC_MALLOC_ATOMIC(size_t <I>nbytes</i>)</b>
+<DD>
+Allocates <I>nbytes</i> of storage.
+Requires (amortized) time proportional to <I>nbytes</i>.
+The resulting object will be automatically deallocated when unreferenced.
+The client promises that the resulting object will never contain any pointers.
+The memory is not cleared.
+This is the preferred way to allocate strings, floating point arrays,
+bitmaps, etc.
+More precise information about pointer locations can be communicated to the
+collector using the interface in
+<A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gc_typedh.txt">gc_typed.h</a> in the distribution.
+<DT> <B>void * GC_MALLOC_UNCOLLECTABLE(size_t <I>nbytes</i>)</b>
+<DD>
+Identical to <TT>GC_MALLOC</tt>,
+except that the resulting object is not automatically
+deallocated. Unlike the system-provided malloc, the collector does
+scan the object for pointers to garbage-collectable memory, even if the
+block itself does not appear to be reachable. (Objects allocated in this way
+are effectively treated as roots by the collector.)
+<DT> <B> void * GC_REALLOC(void *<I>old</i>, size_t <I>new_size</i>) </b>
+<DD>
+Allocate a new object of the indicated size and copy (a prefix of) the
+old object into the new object. The old object is reused in place if
+convenient. If the original object was allocated with
+<TT>GC_MALLOC_ATOMIC</tt>,
+the new object is subject to the same constraints. If it was allocated
+as an uncollectable object, then the new object is uncollectable, and
+the old object (if different) is deallocated.
+<DT> <B> void GC_FREE(void *<I>dead</i>) </b>
+<DD>
+Explicitly deallocate an object. Typically not useful for small
+collectable objects.
+<DT> <B> void * GC_MALLOC_IGNORE_OFF_PAGE(size_t <I>nbytes</i>) </b>
+<DD>
+<DT> <B> void * GC_MALLOC_ATOMIC_IGNORE_OFF_PAGE(size_t <I>nbytes</i>) </b>
+<DD>
+Analogous to <TT>GC_MALLOC</tt> and <TT>GC_MALLOC_ATOMIC</tt>,
+except that the client
+guarantees that as long
+as the resulting object is of use, a pointer is maintained to someplace
+inside the first 512 bytes of the object. This pointer should be declared
+volatile to avoid interference from compiler optimizations.
+(Other nonvolatile pointers to the object may exist as well.)
+This is the
+preferred way to allocate objects that are likely to be > 100KBytes in size.
+It greatly reduces the risk that such objects will be accidentally retained
+when they are no longer needed. Thus space usage may be significantly reduced.
+<DT> <B> void GC_INIT(void) </b>
+<DD>
+On some platforms, it is necessary to invoke this
+<I>from the main executable, not from a dynamic library,</i> before
+the initial invocation of a GC routine. It is recommended that this be done
+in portable code, though we try to ensure that it expands to a no-op
+on as many platforms as possible.
+<DT> <B> void GC_gcollect(void) </b>
+<DD>
+Explicitly force a garbage collection.
+<DT> <B> void GC_enable_incremental(void) </b>
+<DD>
+Cause the garbage collector to perform a small amount of work
+every few invocations of <TT>GC_MALLOC</tt> or the like, instead of performing
+an entire collection at once. This is likely to increase total
+running time. It will improve response on a platform that either has
+suitable support in the garbage collector (Linux and most Unix
+versions, win32 if the collector was suitably built) or if "stubborn"
+allocation is used (see
+<A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gch.txt">gc.h</a>).
+On many platforms this interacts poorly with system calls
+that write to the garbage collected heap.
+<DT> <B> GC_warn_proc GC_set_warn_proc(GC_warn_proc <I>p</i>) </b>
+<DD>
+Replace the default procedure used by the collector to print warnings.
+The collector
+may otherwise write to stderr, most commonly because GC_malloc was used
+in a situation in which GC_malloc_ignore_off_page would have been more
+appropriate. See <A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gch.txt">gc.h</a> for details.
+<DT> <B> void GC_REGISTER_FINALIZER(...) </b>
+<DD>
+Register a function to be called when an object becomes inaccessible.
+This is often useful as a backup method for releasing system resources
+(<I>e.g.</i> closing files) when the object referencing them becomes
+inaccessible.
+It is not an acceptable method to perform actions that must be performed
+in a timely fashion.
+See <A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gch.txt">gc.h</a> for details of the interface.
+See <A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/finalization.html">here</a> for a more detailed discussion
+of the design.
+<P>
+Note that an object may become inaccessible before client code is done
+operating on objects referenced by its fields.
+Suitable synchronization is usually required.
+See <A HREF="http://portal.acm.org/citation.cfm?doid=604131.604153">here</a>
+or <A HREF="http://www.hpl.hp.com/techreports/2002/HPL-2002-335.html">here</a>
+for details.
+</dl>
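+<P>
+The following fragment is a minimal sketch of how the most common of the
+calls described above fit together; the sizes are purely illustrative:
+<PRE>
+#include "gc.h"
+
+int main(void)
+{
+    int **p;
+    char *s;
+    int **root;
+    void * volatile big;
+
+    GC_INIT();          /* portable; often expands to a no-op */
+
+    /* Cleared, collectable, and scanned for pointers. */
+    p = (int **) GC_MALLOC(sizeof(int *));
+
+    /* Not cleared and never scanned; fine for pointer-free data. */
+    s = (char *) GC_MALLOC_ATOMIC(64);
+
+    /* Grow the atomic object; the result is still pointer-free. */
+    s = (char *) GC_REALLOC(s, 128);
+
+    /* Scanned, but never reclaimed automatically: a heap "root". */
+    root = (int **) GC_MALLOC_UNCOLLECTABLE(sizeof(int *));
+    *root = (int *) GC_MALLOC(sizeof(int));
+
+    /* For large objects, keep a pointer near the beginning. */
+    big = GC_MALLOC_IGNORE_OFF_PAGE(200*1024);
+
+    GC_gcollect();      /* explicit collection; rarely needed */
+    return 0;
+}
+</pre>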
+<P>
+If you are concerned with multiprocessor performance and scalability,
+you should consider enabling and using thread-local allocation (<I>e.g.</i>
+<TT>GC_LOCAL_MALLOC</tt>; see <TT>gc_local_alloc.h</tt>). If your platform
+supports it, you should build the collector with parallel marking support
+(<TT>-DPARALLEL_MARK</tt>, or <TT>--enable-parallel-mark</tt>).
+<P>
+If the collector is used in an environment in which pointer location
+information for heap objects is easily available, this can be passed on
+to the collector using the interfaces in either <TT>gc_typed.h</tt>
+or <TT>gc_gcj.h</tt>.
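+<P>
+For example, with <TT>gc_typed.h</tt> an object layout can be described by
+a bitmap with one bit per word. The following sketch (the types and helper
+macros are those declared in <TT>gc_typed.h</tt>) allocates a two-word
+object whose second word is its only pointer field:
+<PRE>
+#include "gc_typed.h"
+
+struct pair { GC_word tag; struct pair *next; };  /* only "next" is a pointer */
+
+struct pair *alloc_pair(void)
+{
+    static GC_descr pair_descr = 0;
+
+    if (pair_descr == 0) {
+        GC_word bitmap[GC_BITMAP_SIZE(struct pair)] = {0};
+        GC_set_bit(bitmap, GC_WORD_OFFSET(struct pair, next));
+        pair_descr = GC_make_descriptor(bitmap, GC_WORD_LEN(struct pair));
+    }
+    return (struct pair *)
+        GC_malloc_explicitly_typed(sizeof(struct pair), pair_descr);
+}
+</pre>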
+<P>
+The collector distribution also includes a <B>string package</b> that takes
+advantage of the collector. For details see
+<A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/gc_source/cordh.txt">cord.h</a>.
+
+<H1>C++ Interface</h1>
+Usage of the collector from C++ is complicated by the fact that there
+are many "standard" ways to allocate memory in C++. The default ::new
+operator, default malloc, and default STL allocators allocate memory
+that is not garbage collected, and is not normally "traced" by the
+collector. This means that any pointers in memory allocated by these
+default allocators will not be seen by the collector. Garbage-collectable
+memory referenced only by pointers stored in such default-allocated
+objects is likely to be reclaimed prematurely by the collector.
+<P>
+It is the programmer's responsibility to ensure that garbage-collectable
+memory is referenced by pointers stored in one of
+<UL>
+<LI> Program variables
+<LI> Garbage-collected objects
+<LI> Uncollected but "traceable" objects
+</ul>
+"Traceable" objects are not necessarily reclaimed by the collector,
+but are scanned for pointers to collectable objects.
+They are allocated by <TT>GC_MALLOC_UNCOLLECTABLE</tt>, as described
+above, and through some interfaces described below.
+<P>
+The easiest way to ensure that collectable objects are properly referenced
+is to allocate only collectable objects. This requires that every
+allocation go through one of the following interfaces, each one of
+which replaces a standard C++ allocation mechanism:
+<DL>
+<DT> <B> STL allocators </b>
+<DD>
+Users of the <A HREF="http://www.sgi.com/tech/stl">SGI extended STL</a>
+can include <TT>new_gc_alloc.h</tt> before including
+STL header files.
+(<TT>gc_alloc.h</tt> corresponds to now obsolete versions of the
+SGI STL.)
+This defines SGI-style allocators
+<UL>
+<LI> alloc
+<LI> single_client_alloc
+<LI> gc_alloc
+<LI> single_client_gc_alloc
+</ul>
+which may be used either directly to allocate memory or to instantiate
+container templates. The first two allocate uncollectable but traced
+memory, while the second two allocate collectable memory.
+The single_client versions are not safe for concurrent access by
+multiple threads, but are faster.
+<P>
+For an example, click <A HREF="http://hpl.hp.com/personal/Hans_Boehm/gc/gc_alloc_exC.txt">here</a>.
+<P>
+Recent versions of the collector also include a more standard-conforming
+allocator implementation in <TT>gc_allocator.h</tt>. It defines
+<UL>
+<LI> traceable_allocator
+<LI> gc_allocator
+</ul>
+Again the former allocates uncollectable but traced memory.
+This should work with any fully standard-conforming C++ compiler.
+<DT> <B> Class inheritance based interface </b>
+<DD>
+Users may include gc_cpp.h and then cause members of classes to
+be allocated in garbage collectable memory by having those classes
+inherit from class gc.
+For details see <A HREF="http://hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gc_cpph.txt">gc_cpp.h</a>.
+<P>
+Linking against libgccpp in addition to the gc library overrides
+::new (and friends) to allocate traceable but uncollectable
+memory, making it safe to refer to collectable objects from the resulting
+memory.
+<DT> <B> C interface </b>
+<DD>
+It is also possible to use the C interface from
+<A HREF="http://hpl.hp.com/personal/Hans_Boehm/gc/gc_source/gch.txt">gc.h</a> directly.
+On platforms which use malloc to implement ::new, it should usually be possible
+to use a version of the collector that has been compiled as a malloc
+replacement. It is also possible to replace ::new and other allocation
+functions suitably, as is done by libgccpp.
+<P>
+Note that user-implemented small-block allocation often works poorly with
+an underlying garbage-collected large block allocator, since the collector
+has to view all objects accessible from the user's free list as reachable.
+This is likely to cause problems if <TT>GC_MALLOC</tt>
+is used with something like
+the original HP version of STL.
+This approach works well with the SGI versions of the STL only if the
+<TT>malloc_alloc</tt> allocator is used.
+</dl>
+</body>
+</html>
Added: llvm-gcc-4.2/trunk/boehm-gc/doc/leak.html
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/doc/leak.html?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/doc/leak.html (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/doc/leak.html Thu Nov 8 16:56:19 2007
@@ -0,0 +1,197 @@
+<HTML>
+<HEAD>
+<TITLE>Using the Garbage Collector as Leak Detector</title>
+</head>
+<BODY>
+<H1>Using the Garbage Collector as Leak Detector</h1>
+The garbage collector may be used as a leak detector.
+In this case, the primary function of the collector is to report
+objects that were allocated (typically with <TT>GC_MALLOC</tt>),
+not deallocated (normally with <TT>GC_FREE</tt>), but are
+no longer accessible. Since the object is no longer accessible,
+there is normally no way to deallocate the object at a later time;
+thus it can safely be assumed that the object has been "leaked".
+<P>
+This is substantially different from counting leak detectors,
+which simply verify that all allocated objects are eventually
+deallocated. A garbage-collector based leak detector can provide
+somewhat more precise information about when an object was leaked.
+More importantly, it does not report objects that are never
+deallocated because they are part of "permanent" data structures.
+Thus it does not require all objects to be deallocated at process
+exit time, a potentially useless activity that often triggers
+large amounts of paging.
+<P>
+All non-ancient versions of the garbage collector provide
+leak detection support. Version 5.3 adds the following
+features:
+<OL>
+<LI> Leak detection mode can be initiated at run-time by
+setting GC_find_leak instead of building the collector with FIND_LEAK
+defined. This variable should be set to a nonzero value
+at program startup.
+<LI> Leaked objects should be reported and then correctly garbage collected.
+Prior versions either reported leaks or functioned as a garbage collector.
+</ol>
+For the rest of this description we will give instructions that work
+with any reasonable version of the collector.
+<P>
+To use the collector as a leak detector, take the following steps:
+<OL>
+<LI> Build the collector with -DFIND_LEAK. Otherwise use default
+build options.
+<LI> Change the program so that all allocation and deallocation goes
+through the garbage collector.
+<LI> Arrange to call <TT>GC_gcollect</tt> at appropriate points to check
+for leaks.
+(For sufficiently long running programs, this will happen implicitly,
+but probably not with sufficient frequency.)
+</ol>
+The second step can usually be accomplished with the
+<TT>-DREDIRECT_MALLOC=GC_malloc</tt> option when the collector is built,
+or by defining <TT>malloc</tt>, <TT>calloc</tt>,
+<TT>realloc</tt> and <TT>free</tt>
+to call the corresponding garbage collector functions.
+But this, by itself, will not yield very informative diagnostics,
+since the collector does not keep track of information about
+how objects were allocated. The error reports will include
+only object addresses.
+<P>
+For more precise error reports, as much of the program as possible
+should use the all uppercase variants of these functions, after
+defining <TT>GC_DEBUG</tt>, and then including <TT>gc.h</tt>.
+In this environment <TT>GC_MALLOC</tt> is a macro which causes
+at least the file name and line number at the allocation point to
+be saved as part of the object. Leak reports will then also include
+this information.
+<P>
+Many collector features (<I>e.g.</i> stubborn objects, finalization,
+and disappearing links) are less useful in this context, and are not
+fully supported. Their use will usually generate additional bogus
+leak reports, since the collector itself drops some associated objects.
+<P>
+The same is generally true of thread support. However, as of 6.0alpha4,
+correct leak reports should be generated with linuxthreads.
+<P>
+On a few platforms (currently Solaris/SPARC, Irix, and, with -DSAVE_CALL_CHAIN,
+Linux/X86), <TT>GC_MALLOC</tt>
+also causes some more information about its call stack to be saved
+in the object. Such information is reproduced in the error
+reports in very non-symbolic form, but it can be very useful with the
+aid of a debugger.
+<H2>An Example</h2>
+The following header file <TT>leak_detector.h</tt> is included in the
+"include" subdirectory of the distribution:
+<PRE>
+#define GC_DEBUG
+#include "gc.h"
+#define malloc(n) GC_MALLOC(n)
+#define calloc(m,n) GC_MALLOC((m)*(n))
+#define free(p) GC_FREE(p)
+#define realloc(p,n) GC_REALLOC((p),(n))
+#define CHECK_LEAKS() GC_gcollect()
+</pre>
+<P>
+Assume the collector has been built with -DFIND_LEAK. (For very
+new versions of the collector, we could instead add the statement
+<TT>GC_find_leak = 1</tt> as the first statement in <TT>main</tt>.)
+<P>
+The program to be tested for leaks can then look like:
+<PRE>
+#include "leak_detector.h"
+
+int main(void) {
+ int *p[10];
+ int i;
+ /* GC_find_leak = 1; for new collector versions not */
+ /* compiled with -DFIND_LEAK. */
+ for (i = 0; i < 10; ++i) {
+ p[i] = malloc(sizeof(int)+i);
+ }
+ for (i = 1; i < 10; ++i) {
+ free(p[i]);
+ }
+ for (i = 0; i < 9; ++i) {
+ p[i] = malloc(sizeof(int)+i);
+ }
+ CHECK_LEAKS();
+}
+</pre>
+<P>
+On an Intel X86 Linux system this produces on the stderr stream:
+<PRE>
+Leaked composite object at 0x806dff0 (leak_test.c:8, sz=4)
+</pre>
+(On most other operating systems, the output is similar.
+If the collector had been built on Linux/X86 with -DSAVE_CALL_CHAIN,
+the output would be closer to the Solaris example. For this to work,
+the program should not be compiled with -fomit-frame-pointer.)
+<P>
+On Irix it reports
+<PRE>
+Leaked composite object at 0x10040fe0 (leak_test.c:8, sz=4)
+ Caller at allocation:
+ ##PC##= 0x10004910
+</pre>
+and on Solaris the error report is
+<PRE>
+Leaked composite object at 0xef621fc8 (leak_test.c:8, sz=4)
+ Call chain at allocation:
+ args: 4 (0x4), 200656 (0x30FD0)
+ ##PC##= 0x14ADC
+ args: 1 (0x1), -268436012 (0xEFFFFDD4)
+ ##PC##= 0x14A64
+</pre>
+In the latter two cases some additional information is given about
+how malloc was called when the leaked object was allocated. For
+Solaris, the first line specifies the arguments to <TT>GC_debug_malloc</tt>
+(the actual allocation routine), the second the program counter inside
+main, the third the arguments to <TT>main</tt>, and finally the program
+counter inside the caller to main (i.e. in the C startup code).
+<P>
+In the Irix case, only the address inside the caller to main is given.
+<P>
+In many cases, a debugger is needed to interpret the additional information.
+On systems supporting the "adb" debugger, the <TT>callprocs</tt> script
+can be used to replace program counter values with symbolic names.
+As of version 6.1, the collector tries to generate symbolic names for
+call stacks if it knows how to do so on the platform. This is true on
+Linux/X86, but not on most other platforms.
+<H2>Simplified leak detection under Linux</h2>
+Since version 6.1, it should be possible to run the collector in leak
+detection mode on a program a.out under Linux/X86 as follows:
+<OL>
+<LI> Ensure that a.out is a single-threaded executable. This doesn't yet work
+for multithreaded programs.
+<LI> If possible, ensure that the addr2line program is installed in
+/usr/bin. (It comes with RedHat Linux.)
+<LI> If possible, compile a.out with full debug information.
+This will improve the quality of the leak reports. With this approach, it is
+no longer necessary to call GC_ routines explicitly, though that can also
+improve the quality of the leak reports.
+<LI> Build the collector and install it in directory <I>foo</i> as follows:
+<UL>
+<LI> configure --prefix=<I>foo</i> --enable-full-debug --enable-redirect-malloc
+--disable-threads
+<LI> make
+<LI> make install
+</ul>
+<LI> Set environment variables as follows:
+<UL>
+<LI> LD_PRELOAD=<I>foo</i>/lib/libgc.so
+<LI> GC_FIND_LEAK
+<LI> You may also want to set GC_PRINT_STATS (to confirm that the collector
+is running) and/or GC_LOOP_ON_ABORT (to facilitate debugging from another
+window if something goes wrong).
+</ul>
+<LI> Simply run a.out as you normally would. Note that if you run anything
+else (<I>e.g.</i> your editor) with those environment variables set,
+it will also be leak tested. This may or may not be useful and/or
+embarrassing. It can generate
+mountains of leak reports if the application wasn't designed to avoid leaks,
+<I>e.g.</i> because it's always short-lived.
+</ol>
+This has not yet been thoroughly tested on large applications, but it's known
+to do the right thing on at least some small ones.
+</body>
+</html>
Added: llvm-gcc-4.2/trunk/boehm-gc/doc/scale.html
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/doc/scale.html?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/doc/scale.html (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/doc/scale.html Thu Nov 8 16:56:19 2007
@@ -0,0 +1,210 @@
+<HTML>
+<HEAD>
+<TITLE>Garbage collector scalability</TITLE>
+</HEAD>
+<BODY>
+<H1>Garbage collector scalability</h1>
+In its default configuration, the Boehm-Demers-Weiser garbage collector
+is not thread-safe. It can be made thread-safe for a number of environments
+by building the collector with the appropriate
+<TT>-D</tt><I>XXX</i><TT>_THREADS</tt> compilation
+flag. This has primarily two effects:
+<OL>
+<LI> It causes the garbage collector to stop all other threads when
+it needs to see a consistent memory state.
+<LI> It causes the collector to acquire a lock around essentially all
+allocation and garbage collection activity.
+</ol>
+Since a single lock is used for all allocation-related activity, only one
+thread can be allocating or collecting at one point. This inherently
+limits performance of multi-threaded applications on multiprocessors.
+<P>
+On most platforms, the allocator/collector lock is implemented as a
+spin lock with exponential back-off. Longer wait times are implemented
+by yielding and/or sleeping. If a collection is in progress, the pure
+spinning stage is skipped. This has the advantage that uncontested and
+thus most uniprocessor lock acquisitions are very cheap. It has the
+disadvantage that the application may sleep for small periods of time
+even when there is work to be done. And threads may be unnecessarily
+woken up for short periods. Nonetheless, this scheme empirically
+outperforms native queue-based mutual exclusion implementations in most
+cases, sometimes drastically so.
+<H2>Options for enhanced scalability</h2>
+Version 6.0 of the collector adds two facilities to enhance collector
+scalability on multiprocessors. As of 6.0alpha1, these are supported
+only under Linux on X86 and IA64 processors, though ports to other
+otherwise supported Pthreads platforms should be straightforward.
+They are intended to be used together.
+<UL>
+<LI>
+Building the collector with <TT>-DPARALLEL_MARK</tt> allows the collector to
+run the mark phase in parallel in multiple threads, and thus on multiple
+processors. The mark phase typically consumes the large majority of the
+collection time. Thus this largely parallelizes the garbage collector
+itself, though not the allocation process. Currently the marking is
+performed by the thread that triggered the collection, together with
+<I>N</i>-1 dedicated
+threads, where <I>N</i> is the number of processors detected by the collector.
+The dedicated threads are created once at initialization time.
+<P>
+A second effect of this flag is to switch to a more concurrent
+implementation of <TT>GC_malloc_many</tt>, so that free lists can be
+built, and memory can be cleared, by more than one thread concurrently.
+<LI>
+Building the collector with -DTHREAD_LOCAL_ALLOC adds support for thread
+local allocation. It does not, by itself, cause thread local allocation
+to be used. It simply allows the use of the interface in
+<TT>gc_local_alloc.h</tt>.
+<P>
+Memory returned from thread-local allocators is completely interchangeable
+with that returned by the standard allocators. It may be used by other
+threads. The only difference is that, if the thread allocates enough
+memory of a certain kind, it will build a thread-local free list for
+objects of that kind, and allocate from that. This greatly reduces
+locking. The thread-local free lists are refilled using
+<TT>GC_malloc_many</tt>.
+<P>
+An important side effect of this flag is to replace the default
+spin-then-sleep lock with a spin-then-queue based implementation.
+This <I>reduces performance</i> for the standard allocation functions,
+though it usually improves performance when thread-local allocation is
+used heavily, and thus the number of short-duration lock acquisitions
+is greatly reduced.
+</ul>
+<P>
+The easiest way to switch an application to thread-local allocation is to
+<OL>
+<LI> Define the macro <TT>GC_REDIRECT_TO_LOCAL</tt>,
+and then include the <TT>gc.h</tt>
+header in each client source file.
+<LI> Invoke <TT>GC_thr_init()</tt> before any allocation.
+<LI> Allocate using <TT>GC_MALLOC</tt>, <TT>GC_MALLOC_ATOMIC</tt>,
+and/or <TT>GC_GCJ_MALLOC</tt>.
+</ol>
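+<P>
+A client source file following these steps might look like this minimal
+sketch (thread creation elided):
+<PRE>
+#define GC_REDIRECT_TO_LOCAL
+#include "gc.h"
+
+void *worker(void *arg)
+{
+    /* With the redirection above, this uses the thread-local */
+    /* allocation caches. */
+    return GC_MALLOC(16);
+}
+
+int main(void)
+{
+    GC_thr_init();      /* before any allocation */
+    /* ... create threads running worker() ... */
+    return 0;
+}
+</pre>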
+<H2>The Parallel Marking Algorithm</h2>
+We use an algorithm similar to
+<A HREF="http://www.yl.is.s.u-tokyo.ac.jp/gc/">that developed by
+Endo, Taura, and Yonezawa</a> at the University of Tokyo.
+However, the data structures and implementation are different,
+and represent a smaller change to the original collector source,
+probably at the expense of extreme scalability. Some of
+the refinements they suggest, <I>e.g.</i> splitting large
+objects, were also incorporated into our approach.
+<P>
+The global mark stack is transformed into a global work queue.
+Unlike the usual case, it never shrinks during a mark phase.
+The mark threads remove objects from the queue by copying them to a
+local mark stack and changing the global descriptor to zero, indicating
+that there is no more work to be done for this entry.
+This removal
+is done with no synchronization. Thus it is possible for more than
+one worker to remove the same entry, resulting in some work duplication.
+<P>
+The global work queue grows only if a marker thread decides to
+return some of its local mark stack to the global one. This
+is done if the global queue appears to be running low, or if
+the local stack is in danger of overflowing. It does require
+synchronization, but should be relatively rare.
+<P>
+The sequential marking code is reused to process local mark stacks.
+Hence the amount of additional code required for parallel marking
+is minimal.
+<P>
+It should be possible to use generational collection in the presence of the
+parallel collector, by calling <TT>GC_enable_incremental()</tt>.
+This does not result in fully incremental collection, since parallel mark
+phases cannot currently be interrupted, and doing so may be too
+expensive.
+<P>
+Gcj-style mark descriptors do not currently mix with the combination
+of local allocation and incremental collection. They should work correctly
+with one or the other, but not both.
+<P>
+The number of marker threads is set on startup to the number of
+available processors (or to the value of the <TT>GC_NPROCS</tt>
+environment variable). If only a single processor is detected,
+parallel marking is disabled.
+<P>
+Note that setting GC_NPROCS to 1 also causes some lock acquisitions inside
+the collector to immediately yield the processor instead of busy waiting
+first. In the case of a multiprocessor and a client with multiple
+simultaneously runnable threads, this may have disastrous performance
+consequences (e.g. a factor of 10 slowdown).
+<H2>Performance</h2>
+We conducted some simple experiments with a version of
+<A HREF="gc_bench.html">our GC benchmark</a> that was slightly modified to
+run multiple concurrent client threads in the same address space.
+Each client thread does the same work as the original benchmark, but they share
+a heap.
+This benchmark involves very little work outside of memory allocation.
+This was run with GC 6.0alpha3 on a dual processor Pentium III/500 machine
+under Linux 2.2.12.
+<P>
+Running with a thread-unsafe collector, the benchmark ran in 9
+seconds. With the simple thread-safe collector,
+built with <TT>-DLINUX_THREADS</tt>, the execution time
+increased to 10.3 seconds, or 23.5 elapsed seconds with two clients.
+(The times for the <TT>malloc</tt>/<TT>free</tt> version
+with glibc <TT>malloc</tt>
+are 10.51 (standard library, pthreads not linked),
+20.90 (one thread, pthreads linked),
+and 24.55 seconds respectively. The benchmark favors a
+garbage collector, since most objects are small.)
+<P>
+The following table gives execution times for the collector built
+with parallel marking and thread-local allocation support
+(<TT>-DGC_LINUX_THREADS -DPARALLEL_MARK -DTHREAD_LOCAL_ALLOC</tt>). We tested
+the client using either one or two marker threads, and running
+one or two client threads. Note that the client uses thread local
+allocation exclusively. With -DTHREAD_LOCAL_ALLOC the collector
+switches to a locking strategy that is better tuned to less frequent
+lock acquisition. The standard allocation primitives thus perform
+slightly worse than without -DTHREAD_LOCAL_ALLOC, and should be
+avoided in time-critical code.
+<P>
+(The results using <TT>pthread_mutex_lock</tt>
+directly for allocation locking would have been worse still, at
+least for older versions of linuxthreads.
+With THREAD_LOCAL_ALLOC, we first repeatedly try to acquire the
+lock with pthread_mutex_trylock(), busy-waiting between attempts.
+After a fixed number of attempts, we use pthread_mutex_lock().)
+<P>
+These measurements do not use incremental collection, nor was prefetching
+enabled in the marker. We used the C version of the benchmark.
+All measurements are in elapsed seconds on an unloaded machine.
+<P>
+<TABLE BORDER ALIGN="CENTER">
+<TR><TH>Number of threads</th><TH>1 marker thread (secs.)</th>
+<TH>2 marker threads (secs.)</th></tr>
+<TR><TD>1 client</td><TD ALIGN="CENTER">10.45</td><TD ALIGN="CENTER">7.85</td>
+<TR><TD>2 clients</td><TD ALIGN="CENTER">19.95</td><TD ALIGN="CENTER">12.3</td>
+</table>
+<P>
+The execution time for the single-threaded case is slightly worse than with
+simple locking. However, even the single-threaded benchmark runs faster than
+the thread-unsafe version if a second processor is available.
+The execution time for two clients with thread-local allocation is
+only 1.4 times the sequential execution time for a single thread in a
+thread-unsafe environment, even though it involves twice the client work.
+That represents close to a
+factor of 2 improvement over the 2 client case with the old collector.
+The old collector clearly
+still suffered from some contention overhead, in spite of the fact that the
+locking scheme had been fairly well tuned.
+<P>
+Full linear speedup (i.e. the same execution time for 1 client on one
+processor as 2 clients on 2 processors)
+is probably not achievable on this kind of
+hardware even with such a small number of processors,
+since the memory system is
+a major constraint for the garbage collector,
+the processors usually share a single memory bus, and thus
+the aggregate memory bandwidth does not increase in
+proportion to the number of processors.
+<P>
+These results are likely to be very sensitive to both hardware and OS
+issues. Preliminary experiments with an older Pentium Pro machine running
+an older kernel were far less encouraging.
+
+</body>
+</html>
Added: llvm-gcc-4.2/trunk/boehm-gc/doc/simple_example.html
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/doc/simple_example.html?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/doc/simple_example.html (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/doc/simple_example.html Thu Nov 8 16:56:19 2007
@@ -0,0 +1,202 @@
+<HTML>
+<HEAD>
+<TITLE>Using the Garbage Collector: A simple example</title>
+</head>
+<BODY>
+<H1>Using the Garbage Collector: A simple example</h1>
+The following consists of step-by-step instructions for building and
+using the collector. We'll assume a Linux/gcc platform and
+a single-threaded application. <FONT COLOR=green>The green
+text contains information about other platforms or scenarios.
+It can be skipped, especially on first reading</font>.
+<H2>Building the collector</h2>
+If you haven't already done so, unpack the collector and enter
+the newly created directory with
+<PRE>
+tar xvfz gc&lt;version&gt;.tar.gz
+cd gc&lt;version&gt;
+</pre>
+<P>
+You can configure, build, and install the collector in a private
+directory, say /home/xyz/gc, with the following commands:
+<PRE>
+./configure --prefix=/home/xyz/gc --disable-threads
+make
+make check
+make install
+</pre>
+Here the "<TT>make check</tt>" command is optional, but highly recommended.
+It runs a basic correctness test which usually takes well under a minute.
+<FONT COLOR=green>
+<H3>Other platforms</h3>
+On non-Unix, non-Linux platforms, the collector is usually built by copying
+the appropriate makefile (see the platform-specific README in doc/README.xxx
+in the distribution) to the file "Makefile" (overwriting the copy of
+Makefile.direct that was originally there), and then typing "make"
+(or "nmake" or ...). This builds the library in the source tree. You may
+want to move it and the files in the include directory to a more convenient
+place.
+<P>
+If you use a makefile that does not require running a configure script,
+you should first look at the makefile, and adjust any options that are
+documented there.
+<P>
+If your platform provides a "make" utility, that is generally preferred
+to platform- and compiler-dependent "project" files. (At least that is the
+strong preference of the would-be maintainer of those project files.)
+<H3>Threads</h3>
+If you need thread support, configure the collector with
+<PRE>
+--enable-threads=posix --enable-thread-local-alloc --enable-parallel-mark
+</pre>
+instead of
+<TT>--disable-threads</tt>.
+If your target is a genuinely old-fashioned uniprocessor (no "hyperthreading",
+etc.) you will want to omit <TT>--enable-parallel-mark</tt>.
+<H3>C++</h3>
+You will need to include the C++ support, which unfortunately tends to
+be among the least portable parts of the collector, since it seems
+to rely on some corner cases of the language. On Linux, it
+suffices to add <TT>--enable-cplusplus</tt> to the configure options.
+</font>
+<H2>Writing the program</h2>
+You will need a
+<PRE>
+#include "gc.h"
+</pre>
+at the beginning of every file that allocates memory through the
+garbage collector. Call <TT>GC_MALLOC</tt> wherever you would
+have call <TT>malloc</tt>. This initializes memory to zero like
+<TT>calloc</tt>; there is no need to explicitly clear the
+result.
+<P>
+If you know that an object will not contain pointers to the
+garbage-collected heap, and you don't need it to be initialized,
+call <TT>GC_MALLOC_ATOMIC</tt> instead.
+<P>
+A function <TT>GC_FREE</tt> is provided but need not be called.
+For very small objects, your program will probably perform better if
+you do not call it, and let the collector do its job.
+<P>
+The <TT>GC_REALLOC</tt> function behaves like the C library <TT>realloc</tt>.
+It allocates uninitialized pointer-free memory if the original
+object was allocated that way.
+<P>
+The following program <TT>loop.c</tt> is a trivial example:
+<PRE>
+#include "gc.h"
+#include &lt;assert.h&gt;
+#include &lt;stdio.h&gt;
+
+int main()
+{
+ int i;
+
+ GC_INIT(); /* Optional on Linux/X86; see below. */
+ for (i = 0; i < 10000000; ++i)
+ {
+ int **p = (int **) GC_MALLOC(sizeof(int *));
+ int *q = (int *) GC_MALLOC_ATOMIC(sizeof(int));
+ assert(*p == 0);
+ *p = (int *) GC_REALLOC(q, 2 * sizeof(int));
+ if (i % 100000 == 0)
+ printf("Heap size = %lu\n", (unsigned long)GC_get_heap_size());
+ }
+ return 0;
+}
+</pre>
+<FONT COLOR=green>
+<H3>Interaction with the system malloc</h3>
+It is usually best not to mix garbage-collected allocation with the system
+<TT>malloc</tt>/<TT>free</tt>. If you do, you need to be careful not to store
+pointers to the garbage-collected heap in memory allocated with the system
+<TT>malloc</tt>.
+<H3>Other Platforms</h3>
+On some other platforms it is necessary to call <TT>GC_INIT()</tt> from the main program,
+which is presumed to be part of the main executable, not a dynamic library.
+This can never hurt, and is thus generally good practice.
+
+<H3>Threads</h3>
+For a multithreaded program some more rules apply:
+<UL>
+<LI>
+Files that either allocate through the GC <I>or make thread-related calls</i>
+should first define the macro <TT>GC_THREADS</tt>, and then
+include <TT>"gc.h"</tt>. On some platforms this will redefine some
+threads primitives, e.g. to let the collector keep track of thread creation.
+<LI>
+To take advantage of fast thread-local allocation, use the following instead
+of including <TT>gc.h</tt>:
+<PRE>
+#define GC_REDIRECT_TO_LOCAL
+#include "gc_local_alloc.h"
+</pre>
+This will cause GC_MALLOC and GC_MALLOC_ATOMIC to keep per-thread allocation
+caches, and greatly reduce the number of lock acquisitions during allocation.
+</ul>
+
+<H3>C++</h3>
+In the case of C++, you need to be especially careful not to store pointers
+to the garbage-collected heap in areas that are not traced by the collector.
+The collector includes some <A HREF="gcinterface.html">alternate interfaces</a>
+to make that easier.
+
+<H3>Debugging</h3>
+Additional debug checks can be performed by defining <TT>GC_DEBUG</tt> before
+including <TT>gc.h</tt>. Additional options are available if the collector
+is also built with <TT>--enable-full-debug</tt> and all allocations are
+performed with <TT>GC_DEBUG</tt> defined.
+
+<H3>What if I can't rewrite/recompile my program?</h3>
+You may be able to build the collector with <TT>--enable-redirect-malloc</tt>
+and set the <TT>LD_PRELOAD</tt> environment variable to point to the resulting
+library, thus replacing the standard <TT>malloc</tt> with its garbage-collected
+counterpart. This is rather platform dependent. See the
+<A HREF="leak.html">leak detection documentation</a> for some more details.
+
+</font>
+
+<H2>Compiling and linking</h2>
+
+The above application <TT>loop.c</tt> test program can be compiled and linked
+with
+
+<PRE>
+cc -I/home/xyz/gc/include loop.c /home/xyz/gc/lib/libgc.a -o loop
+</pre>
+
+The <TT>-I</tt> option directs the compiler to the right include
+directory. In this case, we list the static library
+directly on the compile line; the dynamic library could have been
+used instead, provided we arranged for the dynamic loader to find
+it, e.g. by setting <TT>LD_LIBRARY_PATH</tt>.
+
+<FONT COLOR=green>
+
+<H3>Threads</h3>
+
+On pthread platforms, you will of course also have to link with
+<TT>-lpthread</tt>,
+and compile with any thread-safety options required by your compiler.
+On some platforms, you may also need to link with <TT>-ldl</tt>
+or <TT>-lrt</tt>.
+Looking at threadlibs.c in the GC build directory
+should give you the appropriate
+list if a plain <TT>-lpthread</tt> doesn't work.
+
+</font>
+
+<H2>Running the executable</h2>
+
+The executable can of course be run normally, e.g. by typing
+
+<PRE>
+./loop
+</pre>
+
+The operation of the collector is affected by a number of environment variables.
+For example, setting <TT>GC_PRINT_STATS</tt> produces some
+GC statistics on stdout.
+See <TT>README.environment</tt> in the distribution for details.
+</body>
+</html>
Added: llvm-gcc-4.2/trunk/boehm-gc/doc/tree.html
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/doc/tree.html?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/doc/tree.html (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/doc/tree.html Thu Nov 8 16:56:19 2007
@@ -0,0 +1,199 @@
+<HTML>
+<HEAD>
+ <TITLE> Two-Level Tree Structure for Fast Pointer Lookup</TITLE>
+ <AUTHOR> Hans-J. Boehm, Silicon Graphics (now at HP)</author>
+</HEAD>
+<BODY>
+<H1>Two-Level Tree Structure for Fast Pointer Lookup</h1>
+<P>
+The conservative garbage collector described
+<A HREF="http://www.hpl.hp.com/personal/Hans_Boehm/gc/">here</a>
+uses a 2-level tree
+data structure to aid in fast pointer identification.
+This data structure is described in a bit more detail here, since
+<OL>
+<LI> Variations of the data structure are more generally useful.
+<LI> It appears to be hard to understand by reading the code.
+<LI> Some other collectors appear to use inferior data structures to
+solve the same problem.
+<LI> It is central to fast collector operation.
+</ol>
+A candidate pointer is divided into three sections, the <I>high</i>,
+<I>middle</i>, and <I>low</i> bits. The exact division between these
+three groups of bits is dependent on the detailed collector configuration.
+<P>
+The high and middle bits are used to look up an entry in the table described
+here. The resulting table entry consists of either a block descriptor
+(<TT>struct hblkhdr *</tt> or <TT>hdr *</tt>)
+identifying the layout of objects in the block, or an indication that this
+address range corresponds to the middle of a large block, together with a
+hint for locating the actual block descriptor. Such a hint consists
+of a displacement that can be subtracted from the middle bits of the candidate
+pointer without leaving the object.
+<P>
+In either case, the block descriptor (<TT>struct hblkhdr</tt>)
+refers to a table of object starting addresses (the <TT>hb_map</tt> field).
+The starting address table is indexed by the low bits of the candidate pointer.
+The resulting entry contains a displacement to the beginning of the object,
+or an indication that this cannot be a valid object pointer.
+(If all interior pointers are recognized, pointers into large objects
+are handled specially, as appropriate.)
+
+<H2>The Tree</h2>
+<P>
+The rest of this discussion focuses on the two level data structure
+used to map the high and middle bits to the block descriptor.
+<P>
+The high bits are used as an index into the <TT>GC_top_index</tt> (really
+<TT>GC_arrays._top_index</tt>) array. Each entry points to a
+<TT>bottom_index</tt> data structure. This structure in turn consists
+mostly of an array <TT>index</tt> indexed by the middle bits of
+the candidate pointer. The <TT>index</tt> array contains the actual
+<TT>hdr</tt> pointers.
+<P>
+Thus a pointer lookup consists primarily of a handful of memory references,
+and can be quite fast:
+<OL>
+<LI> The appropriate <TT>bottom_index</tt> pointer is looked up in
+<TT>GC_top_index</tt>, based on the high bits of the candidate pointer.
+<LI> The appropriate <TT>hdr</tt> pointer is looked up in the
+<TT>bottom_index</tt> structure, based on the middle bits.
+<LI> The block layout map pointer is retrieved from the <TT>hdr</tt>
+structure. (This memory reference is necessary since we try to share
+block layout maps.)
+<LI> The displacement to the beginning of the object is retrieved from the
+above map.
+</ol>
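+<P>
+In outline, the lookup behaves like the following sketch. The names and
+constants here are illustrative, not the collector's actual declarations:
+<PRE>
+#define LOG_TOP_SZ    11    /* high bits */
+#define LOG_BOTTOM_SZ 10    /* middle bits */
+#define LOG_HBLKSIZE  13    /* low bits: offset within a block */
+
+struct hdr;                               /* block descriptor */
+struct bottom_index {
+    struct hdr *index[1 << LOG_BOTTOM_SZ];
+};
+struct bottom_index *top_index[1 << LOG_TOP_SZ];
+
+struct hdr *lookup(unsigned long p)
+{
+    unsigned long hi  = p >> (LOG_BOTTOM_SZ + LOG_HBLKSIZE);
+    unsigned long mid = (p >> LOG_HBLKSIZE) & ((1 << LOG_BOTTOM_SZ) - 1);
+    /* With 64-bit addresses, hi is first reduced modulo the table  */
+    /* size, as described below; absent ranges share GC_all_nils.   */
+    struct bottom_index *bi = top_index[hi & ((1 << LOG_TOP_SZ) - 1)];
+    return bi->index[mid];  /* NULL if p is not a heap address */
+}
+</pre>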
+<P>
+In order to conserve space, not all <TT>GC_top_index</tt> entries in fact
+point to distinct <TT>bottom_index</tt> structures. If no address with
+the corresponding high bits is part of the heap, then the entry points
+to <TT>GC_all_nils</tt>, a single <TT>bottom_index</tt> structure consisting
+only of NULL <TT>hdr</tt> pointers.
+<P>
+<TT>Bottom_index</tt> structures contain slightly more information than
+just <TT>hdr</tt> pointers. The <TT>asc_link</tt> field is used to link
+all <TT>bottom_index</tt> structures in ascending order for fast traversal.
+This list is pointed to by <TT>GC_all_bottom_indices</tt>.
+It is maintained with the aid of the <TT>key</tt> field, which contains the
+high bits corresponding to the <TT>bottom_index</tt>.
+
+<H2>64 bit addresses</h2>
+<P>
+In the case of 64 bit addresses, this picture is complicated slightly
+by the fact that one of the index structures would have to be huge to
+cover the entire address space with a two level tree. We deal with this
+by turning <TT>GC_top_index</tt> into a chained hash table, instead of
+a simple array. This adds a <TT>hash_link</tt> field to the
+<TT>bottom_index</tt> structure.
+<P>
+The "hash function" consists of dropping the high bits. This is cheap to
+compute, and guarantees that there will be no collisions if the heap
+is contiguous and not excessively large.
+
+<H2>A picture</h2>
+<P>
+The following is an ASCII diagram of the data structure.
+This was contributed by Dave Barrett several years ago.
+<PRE>
+
+ Data Structure used by GC_base in gc3.7:
+ 21-Apr-94
+
+
+
+
+ 63 LOG_TOP_SZ[11] LOG_BOTTOM_SZ[10] LOG_HBLKSIZE[13]
+ +------------------+----------------+------------------+------------------+
+ p:| | TL_HASH(hi) | | HBLKDISPL(p) |
+ +------------------+----------------+------------------+------------------+
+ \-----------------------HBLKPTR(p)-------------------/
+ \------------hi-------------------/
+ \______ ________/ \________ _______/ \________ _______/
+ V V V
+ | | |
+ GC_top_index[] | | |
+ --- +--------------+ | | |
+ ^ | | | | |
+ | | | | | |
+ TOP +--------------+<--+ | |
+ _SZ +-<| [] | * | |
+(items)| +--------------+ if 0 < bi< HBLKSIZE | |
+ | | | | then large object | |
+ | | | | starts at the bi'th | |
+ v | | | HBLK before p. | i |
+ --- | +--------------+ | (word- |
+ v | aligned) |
+ bi= |GET_BI(p){->hash_link}->key==hi | |
+ v | |
+ | (bottom_index) \ scratch_alloc'd | |
+ | ( struct bi ) / by get_index() | |
+ --- +->+--------------+ | |
+ ^ | | | |
+ ^ | | | |
+ BOTTOM | | ha=GET_HDR_ADDR(p) | |
+_SZ(items)+--------------+<----------------------+ +-------+
+ | +--<| index[] | |
+ | | +--------------+ GC_obj_map: v
+ | | | | from / +-+-+-----+-+-+-+-+ ---
+ v | | | GC_add < 0| | | | | | | | ^
+ --- | +--------------+ _map_entry \ +-+-+-----+-+-+-+-+ |
+ | | asc_link | +-+-+-----+-+-+-+-+ MAXOBJSZ
+ | +--------------+ +-->| | | j | | | | | +1
+ | | key | | +-+-+-----+-+-+-+-+ |
+ | +--------------+ | +-+-+-----+-+-+-+-+ |
+ | | hash_link | | | | | | | | | | v
+ | +--------------+ | +-+-+-----+-+-+-+-+ ---
+ | | |<--MAX_OFFSET--->|
+ | | (bytes)
+HDR(p)| GC_find_header(p) | |<--MAP_ENTRIES-->|
+ | \ from | =HBLKSIZE/WORDSZ
+ | (hdr) (struct hblkhdr) / alloc_hdr() | (1024 on Alpha)
+ +-->+----------------------+ | (8/16 bits each)
+GET_HDR(p)| word hb_sz (words) | |
+ +----------------------+ |
+ | struct hblk *hb_next | |
+ +----------------------+ |
+ |mark_proc hb_mark_proc| |
+ +----------------------+ |
+ | char * hb_map |>-------------+
+ +----------------------+
+ | ushort hb_obj_kind |
+ +----------------------+
+ | hb_last_reclaimed |
+ --- +----------------------+
+ ^ | |
+ MARK_BITS| hb_marks[] | *if hdr is free, hb_sz + DISCARD_WORDS
+_SZ(words)| | is the size of a heap chunk (struct hblk)
+ v | | of at least MININCR*HBLKSIZE bytes (below),
+ --- +----------------------+ otherwise, size of each object in chunk.
+
+Dynamic data structures above are interleaved throughout the heap in blocks of
+size MININCR * HBLKSIZE bytes as done by gc_scratch_alloc which cannot be
+freed; free lists are used (e.g. alloc_hdr). HBLK's below are collected.
+
+ (struct hblk)
+ --- +----------------------+ < HBLKSIZE --- --- DISCARD_
+ ^ |garbage[DISCARD_WORDS]| aligned ^ ^ HDR_BYTES WORDS
+ | | | | v (bytes) (words)
+ | +-----hb_body----------+ < WORDSZ | --- ---
+ | | | aligned | ^ ^
+ | | Object 0 | | hb_sz |
+ | | | i |(word- (words)|
+ | | | (bytes)|aligned) v |
+ | + - - - - - - - - - - -+ --- | --- |
+ | | | ^ | ^ |
+ n * | | j (words) | hb_sz BODY_SZ
+ HBLKSIZE | Object 1 | v v | (words)
+ (bytes) | |--------------- v MAX_OFFSET
+ | + - - - - - - - - - - -+ --- (bytes)
+ | | | !All_INTERIOR_PTRS ^ |
+ | | | sets j only for hb_sz |
+ | | Object N | valid object offsets. | |
+ v | | All objects WORDSZ v v
+ --- +----------------------+ aligned. --- ---
+
+DISCARD_WORDS is normally zero. Indeed the collector has not been tested
+with another value in ages.
+</pre>
+</body>
Added: llvm-gcc-4.2/trunk/boehm-gc/dyn_load.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/dyn_load.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/dyn_load.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/dyn_load.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,1334 @@
+/*
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1997 by Silicon Graphics. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ * Original author: Bill Janssen
+ * Heavily modified by Hans Boehm and others
+ */
+
+/*
+ * This is incredibly OS specific code for tracking down data sections in
+ * dynamic libraries. There appears to be no way of doing this quickly
+ * without groveling through undocumented data structures. We would argue
+ * that this is a bug in the design of the dlopen interface. THIS CODE
+ * MAY BREAK IN FUTURE OS RELEASES. If this matters to you, don't hesitate
+ * to let your vendor know ...
+ *
+ * None of this is safe with dlclose and incremental collection.
+ * But then not much of anything is safe in the presence of dlclose.
+ */
+#if (defined(__linux__) || defined(__GLIBC__)) && !defined(_GNU_SOURCE)
+ /* Can't test LINUX, since this must be defined before other includes */
+# define _GNU_SOURCE
+#endif
+#if !defined(MACOS) && !defined(_WIN32_WCE)
+# include <sys/types.h>
+#endif
+#include "private/gc_priv.h"
+
+/* BTL: avoid circular redefinition of dlopen if GC_SOLARIS_THREADS defined */
+# if (defined(GC_PTHREADS) || defined(GC_SOLARIS_THREADS)) \
+ && defined(dlopen) && !defined(GC_USE_LD_WRAP)
+ /* To support threads in Solaris, gc.h interposes on dlopen by */
+ /* defining "dlopen" to be "GC_dlopen", which is implemented below. */
+ /* However, both GC_FirstDLOpenedLinkMap() and GC_dlopen() use the */
+ /* real system dlopen() in their implementation. We first remove */
+ /* gc.h's dlopen definition and restore it later, after GC_dlopen(). */
+# undef dlopen
+# define GC_must_restore_redefined_dlopen
+# else
+# undef GC_must_restore_redefined_dlopen
+# endif
+
+#if (defined(DYNAMIC_LOADING) || defined(MSWIN32) || defined(MSWINCE)) \
+ && !defined(PCR)
+#if !defined(SUNOS4) && !defined(SUNOS5DL) && !defined(IRIX5) && \
+ !defined(MSWIN32) && !defined(MSWINCE) && \
+ !(defined(ALPHA) && defined(OSF1)) && \
+ !defined(HPUX) && !(defined(LINUX) && defined(__ELF__)) && \
+ !defined(RS6000) && !defined(SCO_ELF) && !defined(DGUX) && \
+ !(defined(FREEBSD) && defined(__ELF__)) && \
+ !(defined(NETBSD) && defined(__ELF__)) && !defined(HURD) && \
+ !defined(DARWIN)
+ --> We only know how to find data segments of dynamic libraries for the
+ --> above. Additional SVR4 variants might not be too
+ --> hard to add.
+#endif
+
+#include <stdio.h>
+#ifdef SUNOS5DL
+# include <sys/elf.h>
+# include <dlfcn.h>
+# include <link.h>
+#endif
+#ifdef SUNOS4
+# include <dlfcn.h>
+# include <link.h>
+# include <a.out.h>
+ /* struct link_map field overrides */
+# define l_next lm_next
+# define l_addr lm_addr
+# define l_name lm_name
+#endif
+
+#if defined(NETBSD)
+# include <machine/elf_machdep.h>
+# define ELFSIZE ARCH_ELFSIZE
+#endif
+
+#if defined(LINUX) && defined(__ELF__) || defined(SCO_ELF) || \
+ (defined(FREEBSD) && defined(__ELF__)) || defined(DGUX) || \
+ (defined(NETBSD) && defined(__ELF__)) || defined(HURD)
+# include <stddef.h>
+# include <elf.h>
+# include <link.h>
+#endif
+
+/* Newer versions of GNU/Linux define this macro. We
+ * define it similarly for any ELF systems that don't. */
+# ifndef ElfW
+# if defined(FREEBSD)
+# if __ELF_WORD_SIZE == 32
+# define ElfW(type) Elf32_##type
+# else
+# define ElfW(type) Elf64_##type
+# endif
+# else
+# ifdef NETBSD
+# if ELFSIZE == 32
+# define ElfW(type) Elf32_##type
+# else
+# define ElfW(type) Elf64_##type
+# endif
+# else
+# if !defined(ELF_CLASS) || ELF_CLASS == ELFCLASS32
+# define ElfW(type) Elf32_##type
+# else
+# define ElfW(type) Elf64_##type
+# endif
+# endif
+# endif
+# endif
+
+/* A user-supplied routine that is called to determine if a DSO must
+   be scanned by the gc. */
+static int (*GC_has_static_roots)(const char *, void *, size_t);
+/* Register the routine. */
+void
+GC_register_has_static_roots_callback
+ (int (*callback)(const char *, void *, size_t))
+{
+ GC_has_static_roots = callback;
+}
+
+#if defined(SUNOS5DL) && !defined(USE_PROC_FOR_LIBRARIES)
+
+#ifdef LINT
+ Elf32_Dyn _DYNAMIC;
+#endif
+
+static struct link_map *
+GC_FirstDLOpenedLinkMap()
+{
+ extern ElfW(Dyn) _DYNAMIC;
+ ElfW(Dyn) *dp;
+ struct r_debug *r;
+ static struct link_map * cachedResult = 0;
+ static ElfW(Dyn) *dynStructureAddr = 0;
+ /* BTL: added to avoid Solaris 5.3 ld.so _DYNAMIC bug */
+
+# ifdef SUNOS53_SHARED_LIB
+ /* BTL: Avoid the Solaris 5.3 bug that _DYNAMIC isn't being set */
+ /* up properly in dynamically linked .so's. This means we have */
+ /* to use its value in the set of original object files loaded */
+ /* at program startup. */
+ if( dynStructureAddr == 0 ) {
+ void* startupSyms = dlopen(0, RTLD_LAZY);
+ dynStructureAddr = (ElfW(Dyn)*)dlsym(startupSyms, "_DYNAMIC");
+ }
+# else
+ dynStructureAddr = &_DYNAMIC;
+# endif
+
+ if( dynStructureAddr == 0) {
+ return(0);
+ }
+ if( cachedResult == 0 ) {
+ int tag;
+ for( dp = ((ElfW(Dyn) *)(&_DYNAMIC)); (tag = dp->d_tag) != 0; dp++ ) {
+ if( tag == DT_DEBUG ) {
+ struct link_map *lm
+ = ((struct r_debug *)(dp->d_un.d_ptr))->r_map;
+ if( lm != 0 ) cachedResult = lm->l_next; /* might be NIL */
+ break;
+ }
+ }
+ }
+ return cachedResult;
+}
+
+#endif /* SUNOS5DL ... */
+
+/* BTL: added to fix circular dlopen definition if GC_SOLARIS_THREADS defined */
+# if defined(GC_must_restore_redefined_dlopen)
+# define dlopen GC_dlopen
+# endif
+
+#if defined(SUNOS4) && !defined(USE_PROC_FOR_LIBRARIES)
+
+#ifdef LINT
+ struct link_dynamic _DYNAMIC;
+#endif
+
+static struct link_map *
+GC_FirstDLOpenedLinkMap()
+{
+ extern struct link_dynamic _DYNAMIC;
+
+ if( &_DYNAMIC == 0) {
+ return(0);
+ }
+ return(_DYNAMIC.ld_un.ld_1->ld_loaded);
+}
+
+/* Return the address of the ld.so allocated common symbol */
+/* with the least address, or 0 if none. */
+static ptr_t GC_first_common()
+{
+ ptr_t result = 0;
+ extern struct link_dynamic _DYNAMIC;
+ struct rtc_symb * curr_symbol;
+
+ if( &_DYNAMIC == 0) {
+ return(0);
+ }
+ curr_symbol = _DYNAMIC.ldd -> ldd_cp;
+ for (; curr_symbol != 0; curr_symbol = curr_symbol -> rtc_next) {
+ if (result == 0
+ || (ptr_t)(curr_symbol -> rtc_sp -> n_value) < result) {
+ result = (ptr_t)(curr_symbol -> rtc_sp -> n_value);
+ }
+ }
+ return(result);
+}
+
+#endif /* SUNOS4 ... */
+
+# if defined(SUNOS4) || defined(SUNOS5DL)
+/* Add dynamic library data sections to the root set. */
+# if !defined(PCR) && !defined(GC_SOLARIS_THREADS) && defined(THREADS)
+# ifndef SRC_M3
+ --> fix mutual exclusion with dlopen
+# endif /* We assume M3 programs don't call dlopen for now */
+# endif
+
+# ifndef USE_PROC_FOR_LIBRARIES
+void GC_register_dynamic_libraries()
+{
+ struct link_map *lm = GC_FirstDLOpenedLinkMap();
+
+
+ for (lm = GC_FirstDLOpenedLinkMap();
+ lm != (struct link_map *) 0; lm = lm->l_next)
+ {
+# ifdef SUNOS4
+ struct exec *e;
+
+ e = (struct exec *) lm->lm_addr;
+ GC_add_roots_inner(
+ ((char *) (N_DATOFF(*e) + lm->lm_addr)),
+ ((char *) (N_BSSADDR(*e) + e->a_bss + lm->lm_addr)),
+ TRUE);
+# endif
+# ifdef SUNOS5DL
+ ElfW(Ehdr) * e;
+ ElfW(Phdr) * p;
+ unsigned long offset;
+ char * start;
+ register int i;
+
+ e = (ElfW(Ehdr) *) lm->l_addr;
+ p = ((ElfW(Phdr) *)(((char *)(e)) + e->e_phoff));
+ offset = ((unsigned long)(lm->l_addr));
+ for( i = 0; i < (int)(e->e_phnum); ((i++),(p++)) ) {
+ switch( p->p_type ) {
+ case PT_LOAD:
+ {
+ if( !(p->p_flags & PF_W) ) break;
+ start = ((char *)(p->p_vaddr)) + offset;
+ GC_add_roots_inner(
+ start,
+ start + p->p_memsz,
+ TRUE
+ );
+ }
+ break;
+ default:
+ break;
+ }
+ }
+# endif
+ }
+# ifdef SUNOS4
+ {
+ static ptr_t common_start = 0;
+ ptr_t common_end;
+ extern ptr_t GC_find_limit();
+
+ if (common_start == 0) common_start = GC_first_common();
+ if (common_start != 0) {
+ common_end = GC_find_limit(common_start, TRUE);
+ GC_add_roots_inner((char *)common_start, (char *)common_end, TRUE);
+ }
+ }
+# endif
+}
+
+# endif /* !USE_PROC ... */
+# endif /* SUNOS */
+
+#if defined(LINUX) && defined(__ELF__) || defined(SCO_ELF) || \
+ (defined(FREEBSD) && defined(__ELF__)) || defined(DGUX) || \
+ (defined(NETBSD) && defined(__ELF__)) || defined(HURD)
+
+
+#ifdef USE_PROC_FOR_LIBRARIES
+
+#include <string.h>
+
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#define MAPS_BUF_SIZE (32*1024)
+
+extern ssize_t GC_repeat_read(int fd, char *buf, size_t count);
+ /* Repeatedly read until buffer is filled, or EOF is encountered */
+ /* Defined in os_dep.c. */
+
+char *GC_parse_map_entry(char *buf_ptr, word *start, word *end,
+ char *prot_buf, unsigned int *maj_dev);
+word GC_apply_to_maps(word (*fn)(char *));
+ /* From os_dep.c */
+
+word GC_register_map_entries(char *maps)
+{
+ char prot_buf[5];
+ char *buf_ptr = maps;
+ word start, end;
+ unsigned int maj_dev;
+ word least_ha, greatest_ha;
+ unsigned i;
+ word datastart = (word)(DATASTART);
+
+ /* Compute heap bounds. FIXME: Should be done by add_to_heap? */
+ least_ha = (word)(-1);
+ greatest_ha = 0;
+ for (i = 0; i < GC_n_heap_sects; ++i) {
+ word sect_start = (word)GC_heap_sects[i].hs_start;
+ word sect_end = sect_start + GC_heap_sects[i].hs_bytes;
+ if (sect_start < least_ha) least_ha = sect_start;
+ if (sect_end > greatest_ha) greatest_ha = sect_end;
+ }
+ if (greatest_ha < (word)GC_scratch_last_end_ptr)
+ greatest_ha = (word)GC_scratch_last_end_ptr;
+
+ for (;;) {
+ buf_ptr = GC_parse_map_entry(buf_ptr, &start, &end, prot_buf, &maj_dev);
+ if (buf_ptr == NULL) return 1;
+ if (prot_buf[1] == 'w') {
+ /* This is a writable mapping. Add it to */
+ /* the root set unless it is already otherwise */
+ /* accounted for. */
+ if (start <= (word)GC_stackbottom && end >= (word)GC_stackbottom) {
+ /* Stack mapping; discard */
+ continue;
+ }
+# ifdef THREADS
+ if (GC_segment_is_thread_stack(start, end)) continue;
+# endif
+ /* We no longer exclude the main data segment. */
+ if (start < least_ha && end > least_ha) {
+ end = least_ha;
+ }
+ if (start < greatest_ha && end > greatest_ha) {
+ start = greatest_ha;
+ }
+ if (start >= least_ha && end <= greatest_ha) continue;
+ GC_add_roots_inner((char *)start, (char *)end, TRUE);
+ }
+ }
+ return 1;
+}
+
+void GC_register_dynamic_libraries()
+{
+ if (!GC_apply_to_maps(GC_register_map_entries))
+ ABORT("Failed to read /proc for library registration.");
+}
+
+/* We now take care of the main data segment ourselves: */
+GC_bool GC_register_main_static_data()
+{
+ return FALSE;
+}
+
+# define HAVE_REGISTER_MAIN_STATIC_DATA
+
+#endif /* USE_PROC_FOR_LIBRARIES */
+
+#if !defined(USE_PROC_FOR_LIBRARIES)
+/* The following is the preferred way to walk dynamic libraries	*/
+/* for glibc 2.2.4+.  Unfortunately, it doesn't work for older		*/
+/* versions. Thanks to Jakub Jelinek for most of the code. */
+
+# if (defined(LINUX) || defined (__GLIBC__)) /* Are others OK here, too? */ \
+ && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ > 2) \
+ || (__GLIBC__ == 2 && __GLIBC_MINOR__ == 2 && defined(DT_CONFIG)))
+
+/* We have the header files for a glibc that includes dl_iterate_phdr. */
+/* It may still not be available in the library on the target system. */
+/* Thus we also treat it as a weak symbol. */
+#define HAVE_DL_ITERATE_PHDR
+
+static int GC_register_dynlib_callback(info, size, ptr)
+ struct dl_phdr_info * info;
+ size_t size;
+ void * ptr;
+{
+ const ElfW(Phdr) * p;
+ char * start;
+ register int i;
+
+ /* Make sure struct dl_phdr_info is at least as big as we need. */
+ if (size < offsetof (struct dl_phdr_info, dlpi_phnum)
+ + sizeof (info->dlpi_phnum))
+ return -1;
+
+ p = info->dlpi_phdr;
+ for( i = 0; i < (int)(info->dlpi_phnum); ((i++),(p++)) ) {
+ switch( p->p_type ) {
+ case PT_LOAD:
+ {
+ if( !(p->p_flags & PF_W) ) break;
+ start = ((char *)(p->p_vaddr)) + info->dlpi_addr;
+
+ if (GC_has_static_roots
+ && !GC_has_static_roots(info->dlpi_name, start, p->p_memsz))
+ break;
+
+ GC_add_roots_inner(start, start + p->p_memsz, TRUE);
+ }
+ break;
+ default:
+ break;
+ }
+ }
+
+ * (int *)ptr = 1; /* Signal that we were called */
+ return 0;
+}
+
+/* Return TRUE if we succeed, FALSE if dl_iterate_phdr wasn't there. */
+
+#pragma weak dl_iterate_phdr
+
+GC_bool GC_register_dynamic_libraries_dl_iterate_phdr()
+{
+ if (dl_iterate_phdr) {
+ int did_something = 0;
+ dl_iterate_phdr(GC_register_dynlib_callback, &did_something);
+ if (!did_something) {
+ /* dl_iterate_phdr may forget the static data segment in */
+ /* statically linked executables. */
+ GC_add_roots_inner(DATASTART, (char *)(DATAEND), TRUE);
+# if defined(DATASTART2)
+ GC_add_roots_inner(DATASTART2, (char *)(DATAEND2), TRUE);
+# endif
+ }
+
+ return TRUE;
+ } else {
+ return FALSE;
+ }
+}
+
+/* Do we need to separately register the main static data segment? */
+GC_bool GC_register_main_static_data()
+{
+ return (dl_iterate_phdr == 0);
+}
+
+#define HAVE_REGISTER_MAIN_STATIC_DATA
+
+# else /* !LINUX || version(glibc) < 2.2.4 */
+
+/* Dynamic loading code for Linux running ELF. Somewhat tested on
+ * Linux/x86, untested but expected to work on Linux/Alpha.
+ * This code was derived from the Solaris/ELF support. Thanks to
+ * whatever kind soul wrote that. - Patrick Bridges */
+
+/* This doesn't necessarily work in all cases, e.g. with preloaded
+ * dynamic libraries. */
+
+#if defined(NETBSD)
+# include <sys/exec_elf.h>
+/* for compatibility with 1.4.x */
+# ifndef DT_DEBUG
+# define DT_DEBUG 21
+# endif
+# ifndef PT_LOAD
+# define PT_LOAD 1
+# endif
+# ifndef PF_W
+# define PF_W 2
+# endif
+#else
+# include <elf.h>
+#endif
+#include <link.h>
+
+# endif
+
+#ifdef __GNUC__
+# pragma weak _DYNAMIC
+#endif
+extern ElfW(Dyn) _DYNAMIC[];
+
+static struct link_map *
+GC_FirstDLOpenedLinkMap()
+{
+ ElfW(Dyn) *dp;
+ static struct link_map *cachedResult = 0;
+
+ if( _DYNAMIC == 0) {
+ return(0);
+ }
+ if( cachedResult == 0 ) {
+ int tag;
+ for( dp = _DYNAMIC; (tag = dp->d_tag) != 0; dp++ ) {
+ /* FIXME: The DT_DEBUG header is not mandated by the */
+ /* ELF spec. This code appears to be dependent on */
+	  /* idiosyncrasies of older GNU tool chains.  If this code */
+ /* fails for you, the real problem is probably that it is */
+ /* being used at all. You should be getting the */
+ /* dl_iterate_phdr version. */
+ if( tag == DT_DEBUG ) {
+ struct link_map *lm
+ = ((struct r_debug *)(dp->d_un.d_ptr))->r_map;
+ if( lm != 0 ) cachedResult = lm->l_next; /* might be NIL */
+ break;
+ }
+ }
+ }
+ return cachedResult;
+}
+
+
+void GC_register_dynamic_libraries()
+{
+ struct link_map *lm;
+
+
+# ifdef HAVE_DL_ITERATE_PHDR
+ if (GC_register_dynamic_libraries_dl_iterate_phdr()) {
+ return;
+ }
+# endif
+ for (lm = GC_FirstDLOpenedLinkMap();
+ lm != (struct link_map *) 0; lm = lm->l_next)
+ {
+ ElfW(Ehdr) * e;
+ ElfW(Phdr) * p;
+ unsigned long offset;
+ char * start;
+ register int i;
+
+ e = (ElfW(Ehdr) *) lm->l_addr;
+ p = ((ElfW(Phdr) *)(((char *)(e)) + e->e_phoff));
+ offset = ((unsigned long)(lm->l_addr));
+ for( i = 0; i < (int)(e->e_phnum); ((i++),(p++)) ) {
+ switch( p->p_type ) {
+ case PT_LOAD:
+ {
+ if( !(p->p_flags & PF_W) ) break;
+ start = ((char *)(p->p_vaddr)) + offset;
+ GC_add_roots_inner(start, start + p->p_memsz, TRUE);
+ }
+ break;
+ default:
+ break;
+ }
+ }
+ }
+}
+
+#endif /* !USE_PROC_FOR_LIBRARIES */
+
+#endif /* LINUX */
+
+#if defined(IRIX5) || (defined(USE_PROC_FOR_LIBRARIES) && !defined(LINUX))
+
+#include <sys/procfs.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <elf.h>
+#include <errno.h>
+#include <signal.h> /* Only for the following test. */
+#ifndef _sigargs
+# define IRIX6
+#endif
+
+extern void * GC_roots_present();
+ /* The type is a lie, since the real type doesn't make sense here, */
+ /* and we only test for NULL. */
+
+
+/* We use /proc to track down all parts of the address space that are */
+/* mapped by the process, and throw out regions we know we shouldn't */
+/* worry about. This may also work under other SVR4 variants. */
+void GC_register_dynamic_libraries()
+{
+ static int fd = -1;
+ char buf[30];
+ static prmap_t * addr_map = 0;
+ static int current_sz = 0; /* Number of records currently in addr_map */
+ static int needed_sz; /* Required size of addr_map */
+ register int i;
+ register long flags;
+ register ptr_t start;
+ register ptr_t limit;
+ ptr_t heap_start = (ptr_t)HEAP_START;
+ ptr_t heap_end = heap_start;
+
+# ifdef SUNOS5DL
+# define MA_PHYS 0
+# endif /* SUNOS5DL */
+
+ if (fd < 0) {
+ sprintf(buf, "/proc/%d", getpid());
+ /* The above generates a lint complaint, since pid_t varies. */
+ /* It's unclear how to improve this. */
+ fd = open(buf, O_RDONLY);
+ if (fd < 0) {
+ ABORT("/proc open failed");
+ }
+ }
+ if (ioctl(fd, PIOCNMAP, &needed_sz) < 0) {
+ GC_err_printf2("fd = %d, errno = %d\n", fd, errno);
+ ABORT("/proc PIOCNMAP ioctl failed");
+ }
+ if (needed_sz >= current_sz) {
+ current_sz = needed_sz * 2 + 1;
+ /* Expansion, plus room for 0 record */
+ addr_map = (prmap_t *)GC_scratch_alloc((word)
+ (current_sz * sizeof(prmap_t)));
+ }
+ if (ioctl(fd, PIOCMAP, addr_map) < 0) {
+ GC_err_printf4("fd = %d, errno = %d, needed_sz = %d, addr_map = 0x%X\n",
+ fd, errno, needed_sz, addr_map);
+ ABORT("/proc PIOCMAP ioctl failed");
+  }
+ if (GC_n_heap_sects > 0) {
+ heap_end = GC_heap_sects[GC_n_heap_sects-1].hs_start
+ + GC_heap_sects[GC_n_heap_sects-1].hs_bytes;
+ if (heap_end < GC_scratch_last_end_ptr) heap_end = GC_scratch_last_end_ptr;
+ }
+ for (i = 0; i < needed_sz; i++) {
+ flags = addr_map[i].pr_mflags;
+ if ((flags & (MA_BREAK | MA_STACK | MA_PHYS
+ | MA_FETCHOP | MA_NOTCACHED)) != 0) goto irrelevant;
+ if ((flags & (MA_READ | MA_WRITE)) != (MA_READ | MA_WRITE))
+ goto irrelevant;
+	/* The latter test is empirically useless in very old Irix	*/
+	/* versions.  Other than the main data and stack segments,	*/
+	/* everything appears to be mapped readable, writable,		*/
+	/* executable, and shared(!!).  This makes no sense to me. - HB */
+ start = (ptr_t)(addr_map[i].pr_vaddr);
+ if (GC_roots_present(start)) goto irrelevant;
+ if (start < heap_end && start >= heap_start)
+ goto irrelevant;
+# ifdef MMAP_STACKS
+ if (GC_is_thread_stack(start)) goto irrelevant;
+# endif /* MMAP_STACKS */
+
+ limit = start + addr_map[i].pr_size;
+ /* The following seemed to be necessary for very old versions */
+ /* of Irix, but it has been reported to discard relevant */
+ /* segments under Irix 6.5. */
+# ifndef IRIX6
+ if (addr_map[i].pr_off == 0 && strncmp(start, ELFMAG, 4) == 0) {
+ /* Discard text segments, i.e. 0-offset mappings against */
+ /* executable files which appear to have ELF headers. */
+ caddr_t arg;
+ int obj;
+# define MAP_IRR_SZ 10
+ static ptr_t map_irr[MAP_IRR_SZ];
+ /* Known irrelevant map entries */
+ static int n_irr = 0;
+ struct stat buf;
+ register int i;
+
+ for (i = 0; i < n_irr; i++) {
+ if (map_irr[i] == start) goto irrelevant;
+ }
+ arg = (caddr_t)start;
+ obj = ioctl(fd, PIOCOPENM, &arg);
+ if (obj >= 0) {
+ fstat(obj, &buf);
+ close(obj);
+ if ((buf.st_mode & 0111) != 0) {
+ if (n_irr < MAP_IRR_SZ) {
+ map_irr[n_irr++] = start;
+ }
+ goto irrelevant;
+ }
+ }
+ }
+# endif /* !IRIX6 */
+ GC_add_roots_inner(start, limit, TRUE);
+ irrelevant: ;
+ }
+    /* Don't keep a cached descriptor, for now.  Some kernels don't   */
+    /* like us to keep a /proc file descriptor around during kill -9. */
+    if (close(fd) < 0) ABORT("Couldn't close /proc file");
+ fd = -1;
+}
+
+# endif /* USE_PROC || IRIX5 */
+
+# if defined(MSWIN32) || defined(MSWINCE)
+
+# define WIN32_LEAN_AND_MEAN
+# define NOSERVICE
+# include <windows.h>
+# include <stdlib.h>
+
+ /* We traverse the entire address space and register all segments */
+ /* that could possibly have been written to. */
+
+ extern GC_bool GC_is_heap_base (ptr_t p);
+
+# ifdef GC_WIN32_THREADS
+ extern void GC_get_next_stack(char *start, char **lo, char **hi);
+ void GC_cond_add_roots(char *base, char * limit)
+ {
+ char * curr_base = base;
+ char * next_stack_lo;
+ char * next_stack_hi;
+
+ if (base == limit) return;
+ for(;;) {
+ GC_get_next_stack(curr_base, &next_stack_lo, &next_stack_hi);
+ if (next_stack_lo >= limit) break;
+ GC_add_roots_inner(curr_base, next_stack_lo, TRUE);
+ curr_base = next_stack_hi;
+ }
+ if (curr_base < limit) GC_add_roots_inner(curr_base, limit, TRUE);
+ }
+# else
+ void GC_cond_add_roots(char *base, char * limit)
+ {
+ char dummy;
+ char * stack_top
+ = (char *) ((word)(&dummy) & ~(GC_sysinfo.dwAllocationGranularity-1));
+ if (base == limit) return;
+ if (limit > stack_top && base < GC_stackbottom) {
+ /* Part of the stack; ignore it. */
+ return;
+ }
+ GC_add_roots_inner(base, limit, TRUE);
+ }
+# endif
+
+# ifdef MSWINCE
+ /* Do we need to separately register the main static data segment? */
+ GC_bool GC_register_main_static_data()
+ {
+ return FALSE;
+ }
+# else /* win32 */
+ extern GC_bool GC_no_win32_dlls;
+
+ GC_bool GC_register_main_static_data()
+ {
+ return GC_no_win32_dlls;
+ }
+# endif /* win32 */
+
+# define HAVE_REGISTER_MAIN_STATIC_DATA
+
+ /* The frame buffer testing code is dead in this version. */
+ /* We leave it here temporarily in case the switch to just */
+  /* testing for MEM_IMAGE sections causes unexpected		*/
+ /* problems. */
+ GC_bool GC_warn_fb = TRUE; /* Warn about traced likely */
+ /* graphics memory. */
+ GC_bool GC_disallow_ignore_fb = FALSE;
+ int GC_ignore_fb_mb; /* Ignore mappings bigger than the */
+ /* specified number of MB. */
+ GC_bool GC_ignore_fb = FALSE; /* Enable frame buffer */
+ /* checking. */
+
+ /* Issue warning if tracing apparent framebuffer. */
+ /* This limits us to one warning, and it's a back door to */
+ /* disable that. */
+
+ /* Should [start, start+len) be treated as a frame buffer */
+ /* and ignored? */
+ /* Unfortunately, we currently are not quite sure how to tell */
+ /* this automatically, and rely largely on user input. */
+ /* We expect that any mapping with type MEM_MAPPED (which */
+ /* apparently excludes library data sections) can be safely */
+ /* ignored. But we're too chicken to do that in this */
+ /* version. */
+ /* Based on a very limited sample, it appears that: */
+ /* - Frame buffer mappings appear as mappings of large */
+ /* length, usually a bit less than a power of two. */
+ /* - The definition of "a bit less" in the above cannot */
+ /* be made more precise. */
+  /* - They have a starting address that is at best 64K aligned.  */
+  /* - They have type == MEM_MAPPED.				   */
+ static GC_bool is_frame_buffer(ptr_t start, size_t len, DWORD tp)
+ {
+ static GC_bool initialized = FALSE;
+# define MB (1024*1024)
+# define DEFAULT_FB_MB 15
+# define MIN_FB_MB 3
+
+ if (GC_disallow_ignore_fb || tp != MEM_MAPPED) return FALSE;
+ if (!initialized) {
+ char * ignore_fb_string = GETENV("GC_IGNORE_FB");
+
+ if (0 != ignore_fb_string) {
+ while (*ignore_fb_string == ' ' || *ignore_fb_string == '\t')
+ ++ignore_fb_string;
+ if (*ignore_fb_string == '\0') {
+ GC_ignore_fb_mb = DEFAULT_FB_MB;
+ } else {
+ GC_ignore_fb_mb = atoi(ignore_fb_string);
+ if (GC_ignore_fb_mb < MIN_FB_MB) {
+ WARN("Bad GC_IGNORE_FB value. Using %ld\n", DEFAULT_FB_MB);
+ GC_ignore_fb_mb = DEFAULT_FB_MB;
+ }
+ }
+ GC_ignore_fb = TRUE;
+ } else {
+ GC_ignore_fb_mb = DEFAULT_FB_MB; /* For warning */
+ }
+ initialized = TRUE;
+ }
+ if (len >= ((size_t)GC_ignore_fb_mb << 20)) {
+ if (GC_ignore_fb) {
+ return TRUE;
+ } else {
+ if (GC_warn_fb) {
+ WARN("Possible frame buffer mapping at 0x%lx: \n"
+ "\tConsider setting GC_IGNORE_FB to improve performance.\n",
+ start);
+ GC_warn_fb = FALSE;
+ }
+ return FALSE;
+ }
+ } else {
+ return FALSE;
+ }
+ }
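+
+  /* Example (a sketch; note the code above is dead in this version): */
+  /* running with GC_IGNORE_FB=16 in the environment would skip       */
+  /* MEM_MAPPED mappings of 16 MB or more, while an empty setting     */
+  /* selects the 15 MB default parsed above.                          */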
+
+# ifdef DEBUG_VIRTUALQUERY
+ void GC_dump_meminfo(MEMORY_BASIC_INFORMATION *buf)
+ {
+ GC_printf4("BaseAddress = %lx, AllocationBase = %lx, RegionSize = %lx(%lu)\n",
+ buf -> BaseAddress, buf -> AllocationBase, buf -> RegionSize,
+ buf -> RegionSize);
+ GC_printf4("\tAllocationProtect = %lx, State = %lx, Protect = %lx, "
+ "Type = %lx\n",
+ buf -> AllocationProtect, buf -> State, buf -> Protect,
+ buf -> Type);
+ }
+# endif /* DEBUG_VIRTUALQUERY */
+
+ extern GC_bool GC_wnt; /* Is Windows NT derivative. */
+ /* Defined and set in os_dep.c. */
+
+ void GC_register_dynamic_libraries()
+ {
+ MEMORY_BASIC_INFORMATION buf;
+ DWORD result;
+ DWORD protect;
+ LPVOID p;
+ char * base;
+ char * limit, * new_limit;
+
+# ifdef MSWIN32
+ if (GC_no_win32_dlls) return;
+# endif
+ base = limit = p = GC_sysinfo.lpMinimumApplicationAddress;
+# if defined(MSWINCE) && !defined(_WIN32_WCE_EMULATION)
+ /* Only the first 32 MB of address space belongs to the current process */
+ while (p < (LPVOID)0x02000000) {
+ result = VirtualQuery(p, &buf, sizeof(buf));
+ if (result == 0) {
+ /* Page is free; advance to the next possible allocation base */
+ new_limit = (char *)
+ (((DWORD) p + GC_sysinfo.dwAllocationGranularity)
+ & ~(GC_sysinfo.dwAllocationGranularity-1));
+ } else
+# else
+ while (p < GC_sysinfo.lpMaximumApplicationAddress) {
+ result = VirtualQuery(p, &buf, sizeof(buf));
+# endif
+ {
+ if (result != sizeof(buf)) {
+ ABORT("Weird VirtualQuery result");
+ }
+ new_limit = (char *)p + buf.RegionSize;
+ protect = buf.Protect;
+ if (buf.State == MEM_COMMIT
+ && (protect == PAGE_EXECUTE_READWRITE
+ || protect == PAGE_READWRITE)
+ && !GC_is_heap_base(buf.AllocationBase)
+ /* This used to check for
+ * !is_frame_buffer(p, buf.RegionSize, buf.Type)
+ * instead of just checking for MEM_IMAGE.
+ * If something breaks, change it back. */
+ /* There is some evidence that we cannot always
+ * ignore MEM_PRIVATE sections under Windows ME
+ * and predecessors. Hence we now also check for
+ * that case. */
+ && (buf.Type == MEM_IMAGE ||
+ !GC_wnt && buf.Type == MEM_PRIVATE)) {
+# ifdef DEBUG_VIRTUALQUERY
+ GC_dump_meminfo(&buf);
+# endif
+ if ((char *)p != limit) {
+ GC_cond_add_roots(base, limit);
+ base = p;
+ }
+ limit = new_limit;
+ }
+ }
+ if (p > (LPVOID)new_limit /* overflow */) break;
+ p = (LPVOID)new_limit;
+ }
+ GC_cond_add_roots(base, limit);
+ }
+
+#endif /* MSWIN32 || MSWINCE */
+
+#if defined(ALPHA) && defined(OSF1)
+
+#include <loader.h>
+
+void GC_register_dynamic_libraries()
+{
+ int status;
+ ldr_process_t mypid;
+
+ /* module */
+ ldr_module_t moduleid = LDR_NULL_MODULE;
+ ldr_module_info_t moduleinfo;
+ size_t moduleinfosize = sizeof(moduleinfo);
+ size_t modulereturnsize;
+
+ /* region */
+ ldr_region_t region;
+ ldr_region_info_t regioninfo;
+ size_t regioninfosize = sizeof(regioninfo);
+ size_t regionreturnsize;
+
+ /* Obtain id of this process */
+ mypid = ldr_my_process();
+
+ /* For each module */
+ while (TRUE) {
+
+ /* Get the next (first) module */
+ status = ldr_next_module(mypid, &moduleid);
+
+ /* Any more modules? */
+ if (moduleid == LDR_NULL_MODULE)
+ break; /* No more modules */
+
+ /* Check status AFTER checking moduleid because */
+ /* of a bug in the non-shared ldr_next_module stub */
+ if (status != 0 ) {
+ GC_printf1("dynamic_load: status = %ld\n", (long)status);
+ {
+ extern char *sys_errlist[];
+ extern int sys_nerr;
+ extern int errno;
+        if (errno < sys_nerr) {
+ GC_printf1("dynamic_load: %s\n", (long)sys_errlist[errno]);
+ } else {
+ GC_printf1("dynamic_load: %d\n", (long)errno);
+ }
+ }
+ ABORT("ldr_next_module failed");
+ }
+
+ /* Get the module information */
+ status = ldr_inq_module(mypid, moduleid, &moduleinfo,
+ moduleinfosize, &modulereturnsize);
+ if (status != 0 )
+ ABORT("ldr_inq_module failed");
+
+    /* Is this module the main program (i.e. the nonshared portion)? */
+ if (moduleinfo.lmi_flags & LDR_MAIN)
+ continue; /* skip the main module */
+
+# ifdef VERBOSE
+ GC_printf("---Module---\n");
+ GC_printf("Module ID = %16ld\n", moduleinfo.lmi_modid);
+ GC_printf("Count of regions = %16d\n", moduleinfo.lmi_nregion);
+ GC_printf("flags for module = %16lx\n", moduleinfo.lmi_flags);
+ GC_printf("pathname of module = \"%s\"\n", moduleinfo.lmi_name);
+# endif
+
+ /* For each region in this module */
+ for (region = 0; region < moduleinfo.lmi_nregion; region++) {
+
+ /* Get the region information */
+      status = ldr_inq_region(mypid, moduleid, region, &regioninfo,
+                              regioninfosize, &regionreturnsize);
+ if (status != 0 )
+ ABORT("ldr_inq_region failed");
+
+ /* only process writable (data) regions */
+ if (! (regioninfo.lri_prot & LDR_W))
+ continue;
+
+# ifdef VERBOSE
+ GC_printf("--- Region ---\n");
+ GC_printf("Region number = %16ld\n",
+ regioninfo.lri_region_no);
+ GC_printf("Protection flags = %016x\n", regioninfo.lri_prot);
+ GC_printf("Virtual address = %16p\n", regioninfo.lri_vaddr);
+ GC_printf("Mapped address = %16p\n", regioninfo.lri_mapaddr);
+ GC_printf("Region size = %16ld\n", regioninfo.lri_size);
+ GC_printf("Region name = \"%s\"\n", regioninfo.lri_name);
+# endif
+
+ /* register region as a garbage collection root */
+ GC_add_roots_inner (
+ (char *)regioninfo.lri_mapaddr,
+ (char *)regioninfo.lri_mapaddr + regioninfo.lri_size,
+ TRUE);
+
+ }
+ }
+}
+#endif
+
+#if defined(HPUX)
+
+#include <errno.h>
+#include <dl.h>
+
+extern int errno;
+extern char *sys_errlist[];
+extern int sys_nerr;
+
+void GC_register_dynamic_libraries()
+{
+ int status;
+ int index = 1; /* Ordinal position in shared library search list */
+ struct shl_descriptor *shl_desc; /* Shared library info, see dl.h */
+
+ /* For each dynamic library loaded */
+ while (TRUE) {
+
+ /* Get info about next shared library */
+ status = shl_get(index, &shl_desc);
+
+    /* Check if this is the end of the list or if some error occurred */
+ if (status != 0) {
+# ifdef GC_HPUX_THREADS
+ /* I've seen errno values of 0. The man page is not clear */
+ /* as to whether errno should get set on a -1 return. */
+ break;
+# else
+ if (errno == EINVAL) {
+ break; /* Moved past end of shared library list --> finished */
+ } else {
+            if (errno < sys_nerr) {
+ GC_printf1("dynamic_load: %s\n", (long) sys_errlist[errno]);
+ } else {
+ GC_printf1("dynamic_load: %d\n", (long) errno);
+ }
+ ABORT("shl_get failed");
+ }
+# endif
+ }
+
+# ifdef VERBOSE
+ GC_printf0("---Shared library---\n");
+ GC_printf1("\tfilename = \"%s\"\n", shl_desc->filename);
+ GC_printf1("\tindex = %d\n", index);
+ GC_printf1("\thandle = %08x\n",
+ (unsigned long) shl_desc->handle);
+ GC_printf1("\ttext seg. start = %08x\n", shl_desc->tstart);
+ GC_printf1("\ttext seg. end = %08x\n", shl_desc->tend);
+ GC_printf1("\tdata seg. start = %08x\n", shl_desc->dstart);
+ GC_printf1("\tdata seg. end = %08x\n", shl_desc->dend);
+ GC_printf1("\tref. count = %lu\n", shl_desc->ref_count);
+# endif
+
+ /* register shared library's data segment as a garbage collection root */
+ GC_add_roots_inner((char *) shl_desc->dstart,
+ (char *) shl_desc->dend, TRUE);
+
+ index++;
+ }
+}
+#endif /* HPUX */
+
+#ifdef RS6000
+#pragma alloca
+#include <sys/ldr.h>
+#include <sys/errno.h>
+void GC_register_dynamic_libraries()
+{
+ int len;
+ char *ldibuf;
+ int ldibuflen;
+ struct ld_info *ldi;
+
+ ldibuf = alloca(ldibuflen = 8192);
+
+ while ( (len = loadquery(L_GETINFO,ldibuf,ldibuflen)) < 0) {
+ if (errno != ENOMEM) {
+ ABORT("loadquery failed");
+ }
+ ldibuf = alloca(ldibuflen *= 2);
+ }
+
+ ldi = (struct ld_info *)ldibuf;
+ while (ldi) {
+ len = ldi->ldinfo_next;
+ GC_add_roots_inner(
+ ldi->ldinfo_dataorg,
+ (ptr_t)(unsigned long)ldi->ldinfo_dataorg
+ + ldi->ldinfo_datasize,
+ TRUE);
+ ldi = len ? (struct ld_info *)((char *)ldi + len) : 0;
+ }
+}
+#endif /* RS6000 */
+
+#ifdef DARWIN
+
+/* __private_extern__ hack required for pre-3.4 gcc versions. */
+#ifndef __private_extern__
+# define __private_extern__ extern
+# include <mach-o/dyld.h>
+# undef __private_extern__
+#else
+# include <mach-o/dyld.h>
+#endif
+#include <mach-o/getsect.h>
+
+/*#define DARWIN_DEBUG*/
+
+static const struct {
+ const char *seg;
+ const char *sect;
+} GC_dyld_sections[] = {
+ { SEG_DATA, SECT_DATA },
+ { SEG_DATA, SECT_BSS },
+ { SEG_DATA, SECT_COMMON }
+};
+
+#ifdef DARWIN_DEBUG
+static const char *GC_dyld_name_for_hdr(const struct GC_MACH_HEADER *hdr) {
+ unsigned long i,c;
+ c = _dyld_image_count();
+ for(i=0;i<c;i++) if(_dyld_get_image_header(i) == hdr)
+ return _dyld_get_image_name(i);
+ return NULL;
+}
+#endif
+
+/* This should never be called by a thread holding the lock */
+static void GC_dyld_image_add(const struct GC_MACH_HEADER *hdr, intptr_t slide)
+{
+ unsigned long start,end,i;
+ const struct GC_MACH_SECTION *sec;
+ if (GC_no_dls) return;
+ for(i=0;i<sizeof(GC_dyld_sections)/sizeof(GC_dyld_sections[0]);i++) {
+# if defined (__LP64__)
+ sec = getsectbynamefromheader_64(
+# else
+ sec = getsectbynamefromheader(
+# endif
+ hdr,GC_dyld_sections[i].seg,GC_dyld_sections[i].sect);
+ if(sec == NULL || sec->size == 0) continue;
+ start = slide + sec->addr;
+ end = start + sec->size;
+# ifdef DARWIN_DEBUG
+ GC_printf4("Adding section at %p-%p (%lu bytes) from image %s\n",
+ start,end,sec->size,GC_dyld_name_for_hdr(hdr));
+# endif
+ GC_add_roots((char*)start,(char*)end);
+ }
+# ifdef DARWIN_DEBUG
+ GC_print_static_roots();
+# endif
+}
+
+/* This should never be called by a thread holding the lock */
+static void GC_dyld_image_remove(const struct GC_MACH_HEADER *hdr,
+ intptr_t slide) {
+ unsigned long start,end,i;
+ const struct GC_MACH_SECTION *sec;
+ for(i=0;i<sizeof(GC_dyld_sections)/sizeof(GC_dyld_sections[0]);i++) {
+# if defined (__LP64__)
+ sec = getsectbynamefromheader_64(
+# else
+ sec = getsectbynamefromheader(
+# endif
+ hdr,GC_dyld_sections[i].seg,GC_dyld_sections[i].sect);
+ if(sec == NULL || sec->size == 0) continue;
+ start = slide + sec->addr;
+ end = start + sec->size;
+# ifdef DARWIN_DEBUG
+ GC_printf4("Removing section at %p-%p (%lu bytes) from image %s\n",
+ start,end,sec->size,GC_dyld_name_for_hdr(hdr));
+# endif
+ GC_remove_roots((char*)start,(char*)end);
+ }
+# ifdef DARWIN_DEBUG
+ GC_print_static_roots();
+# endif
+}
+
+void GC_register_dynamic_libraries() {
+    /* Currently does nothing.  The callbacks are set up by GC_init_dyld();
+       the dyld library takes it from there. */
+}
+
+/* The _dyld_* functions have an internal lock, so no _dyld functions
+   can be called while the world is stopped without the risk of a deadlock.
+   Because of this we MUST set up the callbacks BEFORE we ever stop the world.
+   This should be called BEFORE any thread is created and WITHOUT the
+   allocation lock held. */
+
+void GC_init_dyld() {
+ static GC_bool initialized = FALSE;
+ char *bind_fully_env = NULL;
+
+ if(initialized) return;
+
+# ifdef DARWIN_DEBUG
+ GC_printf0("Registering dyld callbacks...\n");
+# endif
+
+ /* Apple's Documentation:
+ When you call _dyld_register_func_for_add_image, the dynamic linker runtime
+ calls the specified callback (func) once for each of the images that is
+ currently loaded into the program. When a new image is added to the program,
+ your callback is called again with the mach_header for the new image, and the
+ virtual memory slide amount of the new image.
+
+ This WILL properly register already linked libraries and libraries
+ linked in the future
+ */
+
+ _dyld_register_func_for_add_image(GC_dyld_image_add);
+ _dyld_register_func_for_remove_image(GC_dyld_image_remove);
+
+ /* Set this early to avoid reentrancy issues. */
+ initialized = TRUE;
+
+ bind_fully_env = getenv("DYLD_BIND_AT_LAUNCH");
+
+ if (bind_fully_env == NULL) {
+# ifdef DARWIN_DEBUG
+ GC_printf0("Forcing full bind of GC code...\n");
+# endif
+
+ if(!_dyld_bind_fully_image_containing_address((unsigned long*)GC_malloc))
+ GC_abort("_dyld_bind_fully_image_containing_address failed");
+ }
+
+}
+
+#define HAVE_REGISTER_MAIN_STATIC_DATA
+GC_bool GC_register_main_static_data()
+{
+ /* Already done through dyld callbacks */
+ return FALSE;
+}
+
+#endif /* DARWIN */
+
+#else /* !DYNAMIC_LOADING */
+
+#ifdef PCR
+
+# include "il/PCR_IL.h"
+# include "th/PCR_ThCtl.h"
+# include "mm/PCR_MM.h"
+
+void GC_register_dynamic_libraries()
+{
+ /* Add new static data areas of dynamically loaded modules. */
+ {
+ PCR_IL_LoadedFile * p = PCR_IL_GetLastLoadedFile();
+ PCR_IL_LoadedSegment * q;
+
+    /* Skip uncommitted files */
+ while (p != NIL && !(p -> lf_commitPoint)) {
+ /* The loading of this file has not yet been committed */
+ /* Hence its description could be inconsistent. */
+ /* Furthermore, it hasn't yet been run. Hence its data */
+ /* segments can't possibly reference heap allocated */
+ /* objects. */
+ p = p -> lf_prev;
+ }
+ for (; p != NIL; p = p -> lf_prev) {
+ for (q = p -> lf_ls; q != NIL; q = q -> ls_next) {
+ if ((q -> ls_flags & PCR_IL_SegFlags_Traced_MASK)
+ == PCR_IL_SegFlags_Traced_on) {
+ GC_add_roots_inner
+ ((char *)(q -> ls_addr),
+ (char *)(q -> ls_addr) + q -> ls_bytes,
+ TRUE);
+ }
+ }
+ }
+ }
+}
+
+
+#else /* !PCR */
+
+void GC_register_dynamic_libraries(){}
+
+int GC_no_dynamic_loading;
+
+#endif /* !PCR */
+
+#endif /* !DYNAMIC_LOADING */
+
+#ifndef HAVE_REGISTER_MAIN_STATIC_DATA
+
+/* Do we need to separately register the main static data segment? */
+GC_bool GC_register_main_static_data()
+{
+ return TRUE;
+}
+#endif /* HAVE_REGISTER_MAIN_STATIC_DATA */
+
Added: llvm-gcc-4.2/trunk/boehm-gc/finalize.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/finalize.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/finalize.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/finalize.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,898 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1996 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996-1999 by Silicon Graphics. All rights reserved.
+
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, February 1, 1996 1:19 pm PST */
+# define I_HIDE_POINTERS
+# include "private/gc_pmark.h"
+
+# ifdef FINALIZE_ON_DEMAND
+ int GC_finalize_on_demand = 1;
+# else
+ int GC_finalize_on_demand = 0;
+# endif
+
+# ifdef JAVA_FINALIZATION
+ int GC_java_finalization = 1;
+# else
+ int GC_java_finalization = 0;
+# endif
+
+/* Type of mark procedure used for marking from finalizable object. */
+/* This procedure normally does not mark the object, only its */
+/* descendants.							*/
+typedef void finalization_mark_proc(/* ptr_t finalizable_obj_ptr */);
+
+# define HASH3(addr,size,log_size) \
+ ((((word)(addr) >> 3) ^ ((word)(addr) >> (3+(log_size)))) \
+ & ((size) - 1))
+#define HASH2(addr,log_size) HASH3(addr, 1 << log_size, log_size)
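+
+/* For example (a sketch): with log_size == 10 the table has 1024	*/
+/* buckets, and HASH3 xors bits 3..12 of the address with bits		*/
+/* 13..22 before masking to the table size.  The low 3 address bits	*/
+/* are discarded, since they carry little information for heap		*/
+/* objects.								*/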
+
+struct hash_chain_entry {
+ word hidden_key;
+ struct hash_chain_entry * next;
+};
+
+unsigned GC_finalization_failures = 0;
+ /* Number of finalization requests that failed for lack of memory. */
+
+static struct disappearing_link {
+ struct hash_chain_entry prolog;
+# define dl_hidden_link prolog.hidden_key
+ /* Field to be cleared. */
+# define dl_next(x) (struct disappearing_link *)((x) -> prolog.next)
+# define dl_set_next(x,y) (x) -> prolog.next = (struct hash_chain_entry *)(y)
+
+ word dl_hidden_obj; /* Pointer to object base */
+} **dl_head = 0;
+
+static signed_word log_dl_table_size = -1;
+ /* Binary log of */
+ /* current size of array pointed to by dl_head. */
+ /* -1 ==> size is 0. */
+
+word GC_dl_entries = 0; /* Number of entries currently in disappearing */
+ /* link table. */
+
+static struct finalizable_object {
+ struct hash_chain_entry prolog;
+# define fo_hidden_base prolog.hidden_key
+ /* Pointer to object base. */
+ /* No longer hidden once object */
+ /* is on finalize_now queue. */
+# define fo_next(x) (struct finalizable_object *)((x) -> prolog.next)
+# define fo_set_next(x,y) (x) -> prolog.next = (struct hash_chain_entry *)(y)
+ GC_finalization_proc fo_fn; /* Finalizer. */
+ ptr_t fo_client_data;
+ word fo_object_size; /* In bytes. */
+ finalization_mark_proc * fo_mark_proc; /* Mark-through procedure */
+} **fo_head = 0;
+
+struct finalizable_object * GC_finalize_now = 0;
+	/* List of objects that should be finalized now.	*/
+
+static signed_word log_fo_table_size = -1;
+
+word GC_fo_entries = 0;
+
+void GC_push_finalizer_structures GC_PROTO((void))
+{
+ GC_push_all((ptr_t)(&dl_head), (ptr_t)(&dl_head) + sizeof(word));
+ GC_push_all((ptr_t)(&fo_head), (ptr_t)(&fo_head) + sizeof(word));
+ GC_push_all((ptr_t)(&GC_finalize_now),
+ (ptr_t)(&GC_finalize_now) + sizeof(word));
+}
+
+/* Double the size of a hash table.  *log_size_ptr is the log of its	*/
+/* current size.  May be a noop.					*/
+/* *table is a pointer to an array of hash headers. If we succeed, we */
+/* update both *table and *log_size_ptr. */
+/* Lock is held. Signals are disabled. */
+void GC_grow_table(table, log_size_ptr)
+struct hash_chain_entry ***table;
+signed_word * log_size_ptr;
+{
+ register word i;
+ register struct hash_chain_entry *p;
+ int log_old_size = *log_size_ptr;
+ register int log_new_size = log_old_size + 1;
+ word old_size = ((log_old_size == -1)? 0: (1 << log_old_size));
+ register word new_size = 1 << log_new_size;
+ struct hash_chain_entry **new_table = (struct hash_chain_entry **)
+ GC_INTERNAL_MALLOC_IGNORE_OFF_PAGE(
+ (size_t)new_size * sizeof(struct hash_chain_entry *), NORMAL);
+
+ if (new_table == 0) {
+        if (*table == 0) {
+ ABORT("Insufficient space for initial table allocation");
+ } else {
+ return;
+ }
+ }
+ for (i = 0; i < old_size; i++) {
+ p = (*table)[i];
+ while (p != 0) {
+ register ptr_t real_key = (ptr_t)REVEAL_POINTER(p -> hidden_key);
+ register struct hash_chain_entry *next = p -> next;
+ register int new_hash = HASH3(real_key, new_size, log_new_size);
+
+ p -> next = new_table[new_hash];
+ new_table[new_hash] = p;
+ p = next;
+ }
+ }
+ *log_size_ptr = log_new_size;
+ *table = new_table;
+}
+
+# if defined(__STDC__) || defined(__cplusplus)
+ int GC_register_disappearing_link(GC_PTR * link)
+# else
+ int GC_register_disappearing_link(link)
+ GC_PTR * link;
+# endif
+{
+ ptr_t base;
+
+ base = (ptr_t)GC_base((GC_PTR)link);
+ if (base == 0)
+ ABORT("Bad arg to GC_register_disappearing_link");
+ return(GC_general_register_disappearing_link(link, base));
+}
+
+# if defined(__STDC__) || defined(__cplusplus)
+ int GC_general_register_disappearing_link(GC_PTR * link,
+ GC_PTR obj)
+# else
+ int GC_general_register_disappearing_link(link, obj)
+ GC_PTR * link;
+ GC_PTR obj;
+# endif
+
+{
+ struct disappearing_link *curr_dl;
+ int index;
+ struct disappearing_link * new_dl;
+ DCL_LOCK_STATE;
+
+ if ((word)link & (ALIGNMENT-1))
+ ABORT("Bad arg to GC_general_register_disappearing_link");
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+# endif
+ if (log_dl_table_size == -1
+ || GC_dl_entries > ((word)1 << log_dl_table_size)) {
+# ifndef THREADS
+ DISABLE_SIGNALS();
+# endif
+ GC_grow_table((struct hash_chain_entry ***)(&dl_head),
+ &log_dl_table_size);
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Grew dl table to %lu entries\n",
+ (unsigned long)(1 << log_dl_table_size));
+ }
+# endif
+# ifndef THREADS
+ ENABLE_SIGNALS();
+# endif
+ }
+ index = HASH2(link, log_dl_table_size);
+ for (curr_dl = dl_head[index]; curr_dl != 0; curr_dl = dl_next(curr_dl)) {
+ if (curr_dl -> dl_hidden_link == HIDE_POINTER(link)) {
+ curr_dl -> dl_hidden_obj = HIDE_POINTER(obj);
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ return(1);
+ }
+ }
+ new_dl = (struct disappearing_link *)
+ GC_INTERNAL_MALLOC(sizeof(struct disappearing_link),NORMAL);
+ if (0 == new_dl) {
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ new_dl = (struct disappearing_link *)
+ GC_oom_fn(sizeof(struct disappearing_link));
+ if (0 == new_dl) {
+ GC_finalization_failures++;
+ return(0);
+ }
+ /* It's not likely we'll make it here, but ... */
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+# endif
+ }
+ new_dl -> dl_hidden_obj = HIDE_POINTER(obj);
+ new_dl -> dl_hidden_link = HIDE_POINTER(link);
+ dl_set_next(new_dl, dl_head[index]);
+ dl_head[index] = new_dl;
+ GC_dl_entries++;
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ return(0);
+}
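+
+/* Typical usage (an illustrative sketch; obj is assumed to be a	*/
+/* collectable heap object with a pointer field weak_ref):		*/
+/*									*/
+/*   obj -> weak_ref = target;						*/
+/*   GC_general_register_disappearing_link(				*/
+/*       (GC_PTR *)&(obj -> weak_ref), target);			*/
+/*									*/
+/* Once target is found to be unreachable, obj -> weak_ref is		*/
+/* cleared to 0 before target is reclaimed.				*/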
+
+# if defined(__STDC__) || defined(__cplusplus)
+ int GC_unregister_disappearing_link(GC_PTR * link)
+# else
+ int GC_unregister_disappearing_link(link)
+ GC_PTR * link;
+# endif
+{
+ struct disappearing_link *curr_dl, *prev_dl;
+ int index;
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ index = HASH2(link, log_dl_table_size);
+ if (((unsigned long)link & (ALIGNMENT-1))) goto out;
+ prev_dl = 0; curr_dl = dl_head[index];
+ while (curr_dl != 0) {
+ if (curr_dl -> dl_hidden_link == HIDE_POINTER(link)) {
+ if (prev_dl == 0) {
+ dl_head[index] = dl_next(curr_dl);
+ } else {
+ dl_set_next(prev_dl, dl_next(curr_dl));
+ }
+ GC_dl_entries--;
+ UNLOCK();
+ ENABLE_SIGNALS();
+# ifdef DBG_HDRS_ALL
+ dl_set_next(curr_dl, 0);
+# else
+ GC_free((GC_PTR)curr_dl);
+# endif
+ return(1);
+ }
+ prev_dl = curr_dl;
+ curr_dl = dl_next(curr_dl);
+ }
+out:
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return(0);
+}
+
+/* Possible finalization_mark_proc implementations.  Note that mark	*/
+/* stack overflow is handled by the caller, and is not a disaster.	*/
+GC_API void GC_normal_finalize_mark_proc(p)
+ptr_t p;
+{
+ hdr * hhdr = HDR(p);
+
+ PUSH_OBJ((word *)p, hhdr, GC_mark_stack_top,
+ &(GC_mark_stack[GC_mark_stack_size]));
+}
+
+/* This only pays very partial attention to the mark descriptor. */
+/* It does the right thing for normal and atomic objects, and treats */
+/* most others as normal. */
+GC_API void GC_ignore_self_finalize_mark_proc(p)
+ptr_t p;
+{
+ hdr * hhdr = HDR(p);
+ word descr = hhdr -> hb_descr;
+ ptr_t q, r;
+ ptr_t scan_limit;
+ ptr_t target_limit = p + WORDS_TO_BYTES(hhdr -> hb_sz) - 1;
+
+ if ((descr & GC_DS_TAGS) == GC_DS_LENGTH) {
+ scan_limit = p + descr - sizeof(word);
+ } else {
+ scan_limit = target_limit + 1 - sizeof(word);
+ }
+ for (q = p; q <= scan_limit; q += ALIGNMENT) {
+ r = *(ptr_t *)q;
+ if (r < p || r > target_limit) {
+ GC_PUSH_ONE_HEAP((word)r, q);
+ }
+ }
+}
+
+/*ARGSUSED*/
+GC_API void GC_null_finalize_mark_proc(p)
+ptr_t p;
+{
+}
+
+
+
+/* Register a finalization function.  See gc.h for details.	*/
+/* In the nonthreads case, we try to avoid disabling signals,	*/
+/* since it can be expensive.  Threads packages typically	*/
+/* make it cheaper.						*/
+/* The last parameter is a procedure that determines		*/
+/* marking for finalization ordering.  Any objects marked	*/
+/* by that procedure are guaranteed not to have been		*/
+/* finalized when this finalizer is invoked.			*/
+GC_API void GC_register_finalizer_inner(obj, fn, cd, ofn, ocd, mp)
+GC_PTR obj;
+GC_finalization_proc fn;
+GC_PTR cd;
+GC_finalization_proc * ofn;
+GC_PTR * ocd;
+finalization_mark_proc * mp;
+{
+ ptr_t base;
+ struct finalizable_object * curr_fo, * prev_fo;
+ int index;
+ struct finalizable_object *new_fo;
+ hdr *hhdr;
+ DCL_LOCK_STATE;
+
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+# endif
+ if (log_fo_table_size == -1
+ || GC_fo_entries > ((word)1 << log_fo_table_size)) {
+# ifndef THREADS
+ DISABLE_SIGNALS();
+# endif
+ GC_grow_table((struct hash_chain_entry ***)(&fo_head),
+ &log_fo_table_size);
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Grew fo table to %lu entries\n",
+ (unsigned long)(1 << log_fo_table_size));
+ }
+# endif
+# ifndef THREADS
+ ENABLE_SIGNALS();
+# endif
+ }
+    /* In the THREADS case, signals are disabled and we hold the	*/
+    /* allocation lock; otherwise neither is true.  Proceed carefully. */
+ base = (ptr_t)obj;
+ index = HASH2(base, log_fo_table_size);
+ prev_fo = 0; curr_fo = fo_head[index];
+ while (curr_fo != 0) {
+ if (curr_fo -> fo_hidden_base == HIDE_POINTER(base)) {
+ /* Interruption by a signal in the middle of this */
+ /* should be safe. The client may see only *ocd */
+ /* updated, but we'll declare that to be his */
+ /* problem. */
+ if (ocd) *ocd = (GC_PTR) curr_fo -> fo_client_data;
+ if (ofn) *ofn = curr_fo -> fo_fn;
+ /* Delete the structure for base. */
+ if (prev_fo == 0) {
+ fo_head[index] = fo_next(curr_fo);
+ } else {
+ fo_set_next(prev_fo, fo_next(curr_fo));
+ }
+ if (fn == 0) {
+ GC_fo_entries--;
+ /* May not happen if we get a signal. But a high */
+ /* estimate will only make the table larger than */
+ /* necessary. */
+# if !defined(THREADS) && !defined(DBG_HDRS_ALL)
+ GC_free((GC_PTR)curr_fo);
+# endif
+ } else {
+ curr_fo -> fo_fn = fn;
+ curr_fo -> fo_client_data = (ptr_t)cd;
+ curr_fo -> fo_mark_proc = mp;
+ /* Reinsert it. We deleted it first to maintain */
+ /* consistency in the event of a signal. */
+ if (prev_fo == 0) {
+ fo_head[index] = curr_fo;
+ } else {
+ fo_set_next(prev_fo, curr_fo);
+ }
+ }
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ return;
+ }
+ prev_fo = curr_fo;
+ curr_fo = fo_next(curr_fo);
+ }
+ if (ofn) *ofn = 0;
+ if (ocd) *ocd = 0;
+ if (fn == 0) {
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ return;
+ }
+ GET_HDR(base, hhdr);
+ if (0 == hhdr) {
+      /* We won't collect it, hence the finalizer would never be run. */
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ return;
+ }
+ new_fo = (struct finalizable_object *)
+ GC_INTERNAL_MALLOC(sizeof(struct finalizable_object),NORMAL);
+ if (0 == new_fo) {
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ new_fo = (struct finalizable_object *)
+ GC_oom_fn(sizeof(struct finalizable_object));
+ if (0 == new_fo) {
+ GC_finalization_failures++;
+ return;
+ }
+ /* It's not likely we'll make it here, but ... */
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+# endif
+ }
+ new_fo -> fo_hidden_base = (word)HIDE_POINTER(base);
+ new_fo -> fo_fn = fn;
+ new_fo -> fo_client_data = (ptr_t)cd;
+ new_fo -> fo_object_size = hhdr -> hb_sz;
+ new_fo -> fo_mark_proc = mp;
+ fo_set_next(new_fo, fo_head[index]);
+ GC_fo_entries++;
+ fo_head[index] = new_fo;
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+}
+
+# if defined(__STDC__)
+ void GC_register_finalizer(void * obj,
+ GC_finalization_proc fn, void * cd,
+ GC_finalization_proc *ofn, void ** ocd)
+# else
+ void GC_register_finalizer(obj, fn, cd, ofn, ocd)
+ GC_PTR obj;
+ GC_finalization_proc fn;
+ GC_PTR cd;
+ GC_finalization_proc * ofn;
+ GC_PTR * ocd;
+# endif
+{
+ GC_register_finalizer_inner(obj, fn, cd, ofn,
+ ocd, GC_normal_finalize_mark_proc);
+}
+
+# if defined(__STDC__)
+ void GC_register_finalizer_ignore_self(void * obj,
+ GC_finalization_proc fn, void * cd,
+ GC_finalization_proc *ofn, void ** ocd)
+# else
+ void GC_register_finalizer_ignore_self(obj, fn, cd, ofn, ocd)
+ GC_PTR obj;
+ GC_finalization_proc fn;
+ GC_PTR cd;
+ GC_finalization_proc * ofn;
+ GC_PTR * ocd;
+# endif
+{
+ GC_register_finalizer_inner(obj, fn, cd, ofn,
+ ocd, GC_ignore_self_finalize_mark_proc);
+}
+
+# if defined(__STDC__)
+ void GC_register_finalizer_no_order(void * obj,
+ GC_finalization_proc fn, void * cd,
+ GC_finalization_proc *ofn, void ** ocd)
+# else
+ void GC_register_finalizer_no_order(obj, fn, cd, ofn, ocd)
+ GC_PTR obj;
+ GC_finalization_proc fn;
+ GC_PTR cd;
+ GC_finalization_proc * ofn;
+ GC_PTR * ocd;
+# endif
+{
+ GC_register_finalizer_inner(obj, fn, cd, ofn,
+ ocd, GC_null_finalize_mark_proc);
+}
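+
+/* Client-side registration, as an illustrative sketch.  struct my_obj	*/
+/* and my_release() are assumed client code, not part of this library:	*/
+/*									*/
+/*   static void my_finalizer(GC_PTR obj, GC_PTR cd)			*/
+/*   {									*/
+/*       my_release(((struct my_obj *)obj) -> handle);			*/
+/*   }									*/
+/*   ...								*/
+/*   GC_register_finalizer(p, my_finalizer, 0, 0, 0);			*/
+/*									*/
+/* Here p is assumed to point to the base of a collectable object.	*/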
+
+#ifndef NO_DEBUGGING
+void GC_dump_finalization()
+{
+ struct disappearing_link * curr_dl;
+ struct finalizable_object * curr_fo;
+ ptr_t real_ptr, real_link;
+ int dl_size = (log_dl_table_size == -1 ) ? 0 : (1 << log_dl_table_size);
+ int fo_size = (log_fo_table_size == -1 ) ? 0 : (1 << log_fo_table_size);
+ int i;
+
+ GC_printf0("Disappearing links:\n");
+ for (i = 0; i < dl_size; i++) {
+ for (curr_dl = dl_head[i]; curr_dl != 0; curr_dl = dl_next(curr_dl)) {
+ real_ptr = (ptr_t)REVEAL_POINTER(curr_dl -> dl_hidden_obj);
+ real_link = (ptr_t)REVEAL_POINTER(curr_dl -> dl_hidden_link);
+      GC_printf2("Object: 0x%lx, Link: 0x%lx\n", real_ptr, real_link);
+ }
+ }
+ GC_printf0("Finalizers:\n");
+ for (i = 0; i < fo_size; i++) {
+ for (curr_fo = fo_head[i]; curr_fo != 0; curr_fo = fo_next(curr_fo)) {
+ real_ptr = (ptr_t)REVEAL_POINTER(curr_fo -> fo_hidden_base);
+ GC_printf1("Finalizable object: 0x%lx\n", real_ptr);
+ }
+ }
+}
+#endif
+
+/* Called with world stopped.  Cause disappearing links to disappear,	*/
+/* and enqueue objects whose finalizers are ready to run.		*/
+void GC_finalize()
+{
+ struct disappearing_link * curr_dl, * prev_dl, * next_dl;
+ struct finalizable_object * curr_fo, * prev_fo, * next_fo;
+ ptr_t real_ptr, real_link;
+ register int i;
+ int dl_size = (log_dl_table_size == -1 ) ? 0 : (1 << log_dl_table_size);
+ int fo_size = (log_fo_table_size == -1 ) ? 0 : (1 << log_fo_table_size);
+
+ /* Make disappearing links disappear */
+ for (i = 0; i < dl_size; i++) {
+ curr_dl = dl_head[i];
+ prev_dl = 0;
+ while (curr_dl != 0) {
+ real_ptr = (ptr_t)REVEAL_POINTER(curr_dl -> dl_hidden_obj);
+ real_link = (ptr_t)REVEAL_POINTER(curr_dl -> dl_hidden_link);
+ if (!GC_is_marked(real_ptr)) {
+ *(word *)real_link = 0;
+ next_dl = dl_next(curr_dl);
+ if (prev_dl == 0) {
+ dl_head[i] = next_dl;
+ } else {
+ dl_set_next(prev_dl, next_dl);
+ }
+ GC_clear_mark_bit((ptr_t)curr_dl);
+ GC_dl_entries--;
+ curr_dl = next_dl;
+ } else {
+ prev_dl = curr_dl;
+ curr_dl = dl_next(curr_dl);
+ }
+ }
+ }
+ /* Mark all objects reachable via chains of 1 or more pointers */
+ /* from finalizable objects. */
+ GC_ASSERT(GC_mark_state == MS_NONE);
+ for (i = 0; i < fo_size; i++) {
+ for (curr_fo = fo_head[i]; curr_fo != 0; curr_fo = fo_next(curr_fo)) {
+ real_ptr = (ptr_t)REVEAL_POINTER(curr_fo -> fo_hidden_base);
+ if (!GC_is_marked(real_ptr)) {
+ GC_MARKED_FOR_FINALIZATION(real_ptr);
+ GC_MARK_FO(real_ptr, curr_fo -> fo_mark_proc);
+ if (GC_is_marked(real_ptr)) {
+ WARN("Finalization cycle involving %lx\n", real_ptr);
+ }
+ }
+ }
+ }
+ /* Enqueue for finalization all objects that are still */
+ /* unreachable. */
+ GC_words_finalized = 0;
+ for (i = 0; i < fo_size; i++) {
+ curr_fo = fo_head[i];
+ prev_fo = 0;
+ while (curr_fo != 0) {
+ real_ptr = (ptr_t)REVEAL_POINTER(curr_fo -> fo_hidden_base);
+ if (!GC_is_marked(real_ptr)) {
+ if (!GC_java_finalization) {
+ GC_set_mark_bit(real_ptr);
+ }
+ /* Delete from hash table */
+ next_fo = fo_next(curr_fo);
+ if (prev_fo == 0) {
+ fo_head[i] = next_fo;
+ } else {
+ fo_set_next(prev_fo, next_fo);
+ }
+ GC_fo_entries--;
+ /* Add to list of objects awaiting finalization. */
+ fo_set_next(curr_fo, GC_finalize_now);
+ GC_finalize_now = curr_fo;
+	  /* Unhide the object pointer so that any future collections	*/
+	  /* will see it.						*/
+ curr_fo -> fo_hidden_base =
+ (word) REVEAL_POINTER(curr_fo -> fo_hidden_base);
+ GC_words_finalized +=
+ ALIGNED_WORDS(curr_fo -> fo_object_size)
+ + ALIGNED_WORDS(sizeof(struct finalizable_object));
+ GC_ASSERT(GC_is_marked(GC_base((ptr_t)curr_fo)));
+ curr_fo = next_fo;
+ } else {
+ prev_fo = curr_fo;
+ curr_fo = fo_next(curr_fo);
+ }
+ }
+ }
+
+ if (GC_java_finalization) {
+    /* Make sure we mark everything reachable from objects finalized	*/
+    /* using the no_order mark_proc.					*/
+ for (curr_fo = GC_finalize_now;
+ curr_fo != NULL; curr_fo = fo_next(curr_fo)) {
+ real_ptr = (ptr_t)curr_fo -> fo_hidden_base;
+ if (!GC_is_marked(real_ptr)) {
+ if (curr_fo -> fo_mark_proc == GC_null_finalize_mark_proc) {
+ GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
+ }
+ GC_set_mark_bit(real_ptr);
+ }
+ }
+ }
+
+ /* Remove dangling disappearing links. */
+ for (i = 0; i < dl_size; i++) {
+ curr_dl = dl_head[i];
+ prev_dl = 0;
+ while (curr_dl != 0) {
+ real_link = GC_base((ptr_t)REVEAL_POINTER(curr_dl -> dl_hidden_link));
+ if (real_link != 0 && !GC_is_marked(real_link)) {
+ next_dl = dl_next(curr_dl);
+ if (prev_dl == 0) {
+ dl_head[i] = next_dl;
+ } else {
+ dl_set_next(prev_dl, next_dl);
+ }
+ GC_clear_mark_bit((ptr_t)curr_dl);
+ GC_dl_entries--;
+ curr_dl = next_dl;
+ } else {
+ prev_dl = curr_dl;
+ curr_dl = dl_next(curr_dl);
+ }
+ }
+ }
+}
+
+#ifndef JAVA_FINALIZATION_NOT_NEEDED
+
+/* Enqueue all remaining finalizers to be run - Assumes lock is
+ * held, and signals are disabled */
+void GC_enqueue_all_finalizers()
+{
+ struct finalizable_object * curr_fo, * prev_fo, * next_fo;
+ ptr_t real_ptr;
+ register int i;
+ int fo_size;
+
+ fo_size = (log_fo_table_size == -1 ) ? 0 : (1 << log_fo_table_size);
+ GC_words_finalized = 0;
+ for (i = 0; i < fo_size; i++) {
+ curr_fo = fo_head[i];
+ prev_fo = 0;
+ while (curr_fo != 0) {
+ real_ptr = (ptr_t)REVEAL_POINTER(curr_fo -> fo_hidden_base);
+ GC_MARK_FO(real_ptr, GC_normal_finalize_mark_proc);
+ GC_set_mark_bit(real_ptr);
+
+ /* Delete from hash table */
+ next_fo = fo_next(curr_fo);
+ if (prev_fo == 0) {
+ fo_head[i] = next_fo;
+ } else {
+ fo_set_next(prev_fo, next_fo);
+ }
+ GC_fo_entries--;
+
+ /* Add to list of objects awaiting finalization. */
+ fo_set_next(curr_fo, GC_finalize_now);
+ GC_finalize_now = curr_fo;
+
+	/* Unhide the object pointer so that any future collections	*/
+	/* will see it.							*/
+ curr_fo -> fo_hidden_base =
+ (word) REVEAL_POINTER(curr_fo -> fo_hidden_base);
+
+ GC_words_finalized +=
+ ALIGNED_WORDS(curr_fo -> fo_object_size)
+ + ALIGNED_WORDS(sizeof(struct finalizable_object));
+ curr_fo = next_fo;
+ }
+ }
+
+ return;
+}
+
+/* Invoke all remaining finalizers that haven't yet been run.
+ * This is needed for strict compliance with the Java standard,
+ * which arguably requires that the runtime be able to run all finalizers.
+ * Unfortunately, the Java standard implies we have to keep running
+ * finalizers until there are no more left, a potential infinite loop.
+ * YUCK.
+ * Note that this is even more dangerous than the usual Java
+ * finalizers, in that objects reachable from static variables
+ * may have been finalized when these finalizers are run.
+ * Finalizers run at this point must be prepared to deal with a
+ * mostly broken world.
+ * This routine is externally callable, so is called without
+ * the allocation lock.
+ */
+GC_API void GC_finalize_all()
+{
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ while (GC_fo_entries > 0) {
+ GC_enqueue_all_finalizers();
+ UNLOCK();
+ ENABLE_SIGNALS();
+ GC_INVOKE_FINALIZERS();
+ DISABLE_SIGNALS();
+ LOCK();
+ }
+ UNLOCK();
+ ENABLE_SIGNALS();
+}
+#endif
+
+/* Returns true if it is worth calling GC_invoke_finalizers. (Useful if */
+/* finalizers can only be called from some kind of `safe state' and */
+/* getting into that safe state is expensive.) */
+int GC_should_invoke_finalizers GC_PROTO((void))
+{
+ return GC_finalize_now != 0;
+}
+
+/* Invoke finalizers for all objects that are ready to be finalized. */
+/* Should be called without allocation lock. */
+int GC_invoke_finalizers()
+{
+ struct finalizable_object * curr_fo;
+ int count = 0;
+ word mem_freed_before;
+ DCL_LOCK_STATE;
+
+ while (GC_finalize_now != 0) {
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+# endif
+ if (count == 0) {
+ mem_freed_before = GC_mem_freed;
+ }
+ curr_fo = GC_finalize_now;
+# ifdef THREADS
+ if (curr_fo != 0) GC_finalize_now = fo_next(curr_fo);
+ UNLOCK();
+ ENABLE_SIGNALS();
+ if (curr_fo == 0) break;
+# else
+ GC_finalize_now = fo_next(curr_fo);
+# endif
+ fo_set_next(curr_fo, 0);
+ (*(curr_fo -> fo_fn))((ptr_t)(curr_fo -> fo_hidden_base),
+ curr_fo -> fo_client_data);
+ curr_fo -> fo_client_data = 0;
+ ++count;
+# ifdef UNDEFINED
+ /* This is probably a bad idea. It throws off accounting if */
+ /* nearly all objects are finalizable. O.w. it shouldn't */
+ /* matter. */
+ GC_free((GC_PTR)curr_fo);
+# endif
+ }
+ if (count != 0 && mem_freed_before != GC_mem_freed) {
+ LOCK();
+ GC_finalizer_mem_freed += (GC_mem_freed - mem_freed_before);
+ UNLOCK();
+ }
+ return count;
+}
+
+void (* GC_finalizer_notifier)() = (void (*) GC_PROTO((void)))0;
+
+static GC_word last_finalizer_notification = 0;
+
+void GC_notify_or_invoke_finalizers GC_PROTO((void))
+{
+ /* This is a convenient place to generate backtraces if appropriate, */
+ /* since that code is not callable with the allocation lock. */
+# if defined(KEEP_BACK_PTRS) || defined(MAKE_BACK_GRAPH)
+ static word last_back_trace_gc_no = 1; /* Skip first one. */
+
+ if (GC_gc_no > last_back_trace_gc_no) {
+ word i;
+
+# ifdef KEEP_BACK_PTRS
+ LOCK();
+ /* Stops when GC_gc_no wraps; that's OK. */
+ last_back_trace_gc_no = (word)(-1); /* disable others. */
+ for (i = 0; i < GC_backtraces; ++i) {
+ /* FIXME: This tolerates concurrent heap mutation, */
+ /* which may cause occasional mysterious results. */
+ /* We need to release the GC lock, since GC_print_callers */
+ /* acquires it. It probably shouldn't. */
+ UNLOCK();
+ GC_generate_random_backtrace_no_gc();
+ LOCK();
+ }
+ last_back_trace_gc_no = GC_gc_no;
+ UNLOCK();
+# endif
+# ifdef MAKE_BACK_GRAPH
+ if (GC_print_back_height)
+ GC_print_back_graph_stats();
+# endif
+ }
+# endif
+ if (GC_finalize_now == 0) return;
+ if (!GC_finalize_on_demand) {
+ (void) GC_invoke_finalizers();
+# ifndef THREADS
+ GC_ASSERT(GC_finalize_now == 0);
+# endif /* Otherwise GC can run concurrently and add more */
+ return;
+ }
+ if (GC_finalizer_notifier != (void (*) GC_PROTO((void)))0
+ && last_finalizer_notification != GC_gc_no) {
+ last_finalizer_notification = GC_gc_no;
+ GC_finalizer_notifier();
+ }
+}
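+
+/* Illustrative client-side sketch (my_wakeup is a hypothetical	*/
+/* client function): with GC_finalize_on_demand set, finalization	*/
+/* can be deferred to a safe point in the client's own event loop:	*/
+/*									*/
+/*   GC_finalize_on_demand = 1;					*/
+/*   GC_finalizer_notifier = my_wakeup;				*/
+/*   ... later, at a safe point ...					*/
+/*   if (GC_should_invoke_finalizers()) GC_invoke_finalizers();	*/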
+
+# ifdef __STDC__
+ GC_PTR GC_call_with_alloc_lock(GC_fn_type fn,
+ GC_PTR client_data)
+# else
+ GC_PTR GC_call_with_alloc_lock(fn, client_data)
+ GC_fn_type fn;
+ GC_PTR client_data;
+# endif
+{
+ GC_PTR result;
+ DCL_LOCK_STATE;
+
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+ SET_LOCK_HOLDER();
+# endif
+ result = (*fn)(client_data);
+# ifdef THREADS
+# ifndef GC_ASSERTIONS
+ UNSET_LOCK_HOLDER();
+# endif /* o.w. UNLOCK() does it implicitly */
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ return(result);
+}
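+
+/* Illustrative sketch: run a client routine that must not race with	*/
+/* the collector.  my_walk_heap() and my_data are assumed client	*/
+/* code:								*/
+/*									*/
+/*   result = GC_call_with_alloc_lock(my_walk_heap, my_data);		*/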
+
+#if !defined(NO_DEBUGGING)
+
+void GC_print_finalization_stats()
+{
+ struct finalizable_object *fo = GC_finalize_now;
+ size_t ready = 0;
+
+ GC_printf2("%lu finalization table entries; %lu disappearing links\n",
+ GC_fo_entries, GC_dl_entries);
+ for (; 0 != fo; fo = fo_next(fo)) ++ready;
+ GC_printf1("%lu objects are eligible for immediate finalization\n", ready);
+}
+
+#endif /* !NO_DEBUGGING */
Added: llvm-gcc-4.2/trunk/boehm-gc/gc.mak
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gc.mak?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gc.mak (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gc.mak Thu Nov 8 16:56:19 2007
@@ -0,0 +1,2158 @@
+# Microsoft Developer Studio Generated NMAKE File, Format Version 4.10
+# This has been hand-edited way too many times.
+# A clean, manually generated makefile would be an improvement.
+
+# TARGTYPE "Win32 (x86) Application" 0x0101
+# TARGTYPE "Win32 (x86) Dynamic-Link Library" 0x0102
+
+!IF "$(CFG)" == ""
+CFG=gctest - Win32 Release
+!MESSAGE No configuration specified.  Defaulting to gctest - Win32 Release.
+!ENDIF
+
+!IF "$(CFG)" != "gc - Win32 Release" && "$(CFG)" != "gc - Win32 Debug" &&\
+ "$(CFG)" != "gctest - Win32 Release" && "$(CFG)" != "gctest - Win32 Debug" &&\
+ "$(CFG)" != "cord - Win32 Release" && "$(CFG)" != "cord - Win32 Debug"
+!MESSAGE Invalid configuration "$(CFG)" specified.
+!MESSAGE You can specify a configuration when running NMAKE on this makefile
+!MESSAGE by defining the macro CFG on the command line. For example:
+!MESSAGE
+!MESSAGE NMAKE /f "gc.mak" CFG="cord - Win32 Debug"
+!MESSAGE
+!MESSAGE Possible choices for configuration are:
+!MESSAGE
+!MESSAGE "gc - Win32 Release" (based on "Win32 (x86) Dynamic-Link Library")
+!MESSAGE "gc - Win32 Debug" (based on "Win32 (x86) Dynamic-Link Library")
+!MESSAGE "gctest - Win32 Release" (based on "Win32 (x86) Application")
+!MESSAGE "gctest - Win32 Debug" (based on "Win32 (x86) Application")
+!MESSAGE "cord - Win32 Release" (based on "Win32 (x86) Application")
+!MESSAGE "cord - Win32 Debug" (based on "Win32 (x86) Application")
+!MESSAGE
+!ERROR An invalid configuration is specified.
+!ENDIF
+
+!IF "$(OS)" == "Windows_NT"
+NULL=
+!ELSE
+NULL=nul
+!ENDIF
+################################################################################
+# Begin Project
+# PROP Target_Last_Scanned "gctest - Win32 Debug"
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 0
+# PROP BASE Output_Dir "Release"
+# PROP BASE Intermediate_Dir "Release"
+# PROP BASE Target_Dir ""
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 0
+# PROP Output_Dir "Release"
+# PROP Intermediate_Dir "Release"
+# PROP Target_Dir ""
+OUTDIR=.\Release
+INTDIR=.\Release
+
+ALL : ".\Release\gc.dll" ".\Release\gc.bsc"
+
+CLEAN :
+ -@erase ".\Release\allchblk.obj"
+ -@erase ".\Release\allchblk.sbr"
+ -@erase ".\Release\alloc.obj"
+ -@erase ".\Release\alloc.sbr"
+ -@erase ".\Release\blacklst.obj"
+ -@erase ".\Release\blacklst.sbr"
+ -@erase ".\Release\checksums.obj"
+ -@erase ".\Release\checksums.sbr"
+ -@erase ".\Release\dbg_mlc.obj"
+ -@erase ".\Release\dbg_mlc.sbr"
+ -@erase ".\Release\dyn_load.obj"
+ -@erase ".\Release\dyn_load.sbr"
+ -@erase ".\Release\finalize.obj"
+ -@erase ".\Release\finalize.sbr"
+ -@erase ".\Release\gc.bsc"
+ -@erase ".\Release\gc_cpp.obj"
+ -@erase ".\Release\gc_cpp.sbr"
+ -@erase ".\Release\gc.dll"
+ -@erase ".\Release\gc.exp"
+ -@erase ".\Release\gc.lib"
+ -@erase ".\Release\headers.obj"
+ -@erase ".\Release\headers.sbr"
+ -@erase ".\Release\mach_dep.obj"
+ -@erase ".\Release\mach_dep.sbr"
+ -@erase ".\Release\malloc.obj"
+ -@erase ".\Release\malloc.sbr"
+ -@erase ".\Release\mallocx.obj"
+ -@erase ".\Release\mallocx.sbr"
+ -@erase ".\Release\mark.obj"
+ -@erase ".\Release\mark.sbr"
+ -@erase ".\Release\mark_rts.obj"
+ -@erase ".\Release\mark_rts.sbr"
+ -@erase ".\Release\misc.obj"
+ -@erase ".\Release\misc.sbr"
+ -@erase ".\Release\new_hblk.obj"
+ -@erase ".\Release\new_hblk.sbr"
+ -@erase ".\Release\obj_map.obj"
+ -@erase ".\Release\obj_map.sbr"
+ -@erase ".\Release\os_dep.obj"
+ -@erase ".\Release\os_dep.sbr"
+ -@erase ".\Release\ptr_chck.obj"
+ -@erase ".\Release\ptr_chck.sbr"
+ -@erase ".\Release\reclaim.obj"
+ -@erase ".\Release\reclaim.sbr"
+ -@erase ".\Release\stubborn.obj"
+ -@erase ".\Release\stubborn.sbr"
+ -@erase ".\Release\typd_mlc.obj"
+ -@erase ".\Release\typd_mlc.sbr"
+ -@erase ".\Release\win32_threads.obj"
+ -@erase ".\Release\win32_threads.sbr"
+
+"$(OUTDIR)" :
+ if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"
+
+CPP=cl.exe
+# ADD BASE CPP /nologo /MT /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /YX /c
+# ADD CPP /nologo /MD /W3 /GX /O2 /I include /D "NDEBUG" /D "SILENT" /D "GC_BUILD" /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D "GC_WIN32_THREADS" /FR /YX /c
+CPP_PROJ=/nologo /MD /W3 /GX /O2 /I include /D "NDEBUG" /D "SILENT" /D "GC_BUILD" /D\
+ "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D\
+ "GC_WIN32_THREADS" /FR"$(INTDIR)/" /Fp"$(INTDIR)/gc.pch" /YX /Fo"$(INTDIR)/" /c
+CPP_OBJS=.\Release/
+CPP_SBRS=.\Release/
+
+.c{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.c{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+MTL=mktyplib.exe
+# ADD BASE MTL /nologo /D "NDEBUG" /win32
+# ADD MTL /nologo /D "NDEBUG" /win32
+MTL_PROJ=/nologo /D "NDEBUG" /win32
+RSC=rc.exe
+# ADD BASE RSC /l 0x809 /d "NDEBUG"
+# ADD RSC /l 0x809 /d "NDEBUG"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+BSC32_FLAGS=/nologo /o"$(OUTDIR)/gc.bsc"
+BSC32_SBRS= \
+ ".\Release\allchblk.sbr" \
+ ".\Release\alloc.sbr" \
+ ".\Release\blacklst.sbr" \
+ ".\Release\checksums.sbr" \
+ ".\Release\dbg_mlc.sbr" \
+ ".\Release\dyn_load.sbr" \
+ ".\Release\finalize.sbr" \
+ ".\Release\gc_cpp.sbr" \
+ ".\Release\headers.sbr" \
+ ".\Release\mach_dep.sbr" \
+ ".\Release\malloc.sbr" \
+ ".\Release\mallocx.sbr" \
+ ".\Release\mark.sbr" \
+ ".\Release\mark_rts.sbr" \
+ ".\Release\misc.sbr" \
+ ".\Release\new_hblk.sbr" \
+ ".\Release\obj_map.sbr" \
+ ".\Release\os_dep.sbr" \
+ ".\Release\ptr_chck.sbr" \
+ ".\Release\reclaim.sbr" \
+ ".\Release\stubborn.sbr" \
+ ".\Release\typd_mlc.sbr" \
+ ".\Release\win32_threads.sbr"
+
+".\Release\gc.bsc" : "$(OUTDIR)" $(BSC32_SBRS)
+ $(BSC32) @<<
+ $(BSC32_FLAGS) $(BSC32_SBRS)
+<<
+
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /dll /machine:I386
+# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /dll /machine:I386
+LINK32_FLAGS=kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib\
+ advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib\
+ odbccp32.lib /nologo /subsystem:windows /dll /incremental:no\
+ /pdb:"$(OUTDIR)/gc.pdb" /machine:I386 /out:"$(OUTDIR)/gc.dll"\
+ /implib:"$(OUTDIR)/gc.lib"
+LINK32_OBJS= \
+ ".\Release\allchblk.obj" \
+ ".\Release\alloc.obj" \
+ ".\Release\blacklst.obj" \
+ ".\Release\checksums.obj" \
+ ".\Release\dbg_mlc.obj" \
+ ".\Release\dyn_load.obj" \
+ ".\Release\finalize.obj" \
+ ".\Release\gc_cpp.obj" \
+ ".\Release\headers.obj" \
+ ".\Release\mach_dep.obj" \
+ ".\Release\malloc.obj" \
+ ".\Release\mallocx.obj" \
+ ".\Release\mark.obj" \
+ ".\Release\mark_rts.obj" \
+ ".\Release\misc.obj" \
+ ".\Release\new_hblk.obj" \
+ ".\Release\obj_map.obj" \
+ ".\Release\os_dep.obj" \
+ ".\Release\ptr_chck.obj" \
+ ".\Release\reclaim.obj" \
+ ".\Release\stubborn.obj" \
+ ".\Release\typd_mlc.obj" \
+ ".\Release\win32_threads.obj"
+
+".\Release\gc.dll" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
+ $(LINK32) @<<
+ $(LINK32_FLAGS) $(LINK32_OBJS)
+<<
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 1
+# PROP BASE Output_Dir "Debug"
+# PROP BASE Intermediate_Dir "Debug"
+# PROP BASE Target_Dir ""
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 1
+# PROP Output_Dir "Debug"
+# PROP Intermediate_Dir "Debug"
+# PROP Target_Dir ""
+OUTDIR=.\Debug
+INTDIR=.\Debug
+
+ALL : ".\Debug\gc.dll" ".\Debug\gc.bsc"
+
+CLEAN :
+ -@erase ".\Debug\allchblk.obj"
+ -@erase ".\Debug\allchblk.sbr"
+ -@erase ".\Debug\alloc.obj"
+ -@erase ".\Debug\alloc.sbr"
+ -@erase ".\Debug\blacklst.obj"
+ -@erase ".\Debug\blacklst.sbr"
+ -@erase ".\Debug\checksums.obj"
+ -@erase ".\Debug\checksums.sbr"
+ -@erase ".\Debug\dbg_mlc.obj"
+ -@erase ".\Debug\dbg_mlc.sbr"
+ -@erase ".\Debug\dyn_load.obj"
+ -@erase ".\Debug\dyn_load.sbr"
+ -@erase ".\Debug\finalize.obj"
+ -@erase ".\Debug\finalize.sbr"
+ -@erase ".\Debug\gc_cpp.obj"
+ -@erase ".\Debug\gc_cpp.sbr"
+ -@erase ".\Debug\gc.bsc"
+ -@erase ".\Debug\gc.dll"
+ -@erase ".\Debug\gc.exp"
+ -@erase ".\Debug\gc.lib"
+ -@erase ".\Debug\gc.map"
+ -@erase ".\Debug\gc.pdb"
+ -@erase ".\Debug\headers.obj"
+ -@erase ".\Debug\headers.sbr"
+ -@erase ".\Debug\mach_dep.obj"
+ -@erase ".\Debug\mach_dep.sbr"
+ -@erase ".\Debug\malloc.obj"
+ -@erase ".\Debug\malloc.sbr"
+ -@erase ".\Debug\mallocx.obj"
+ -@erase ".\Debug\mallocx.sbr"
+ -@erase ".\Debug\mark.obj"
+ -@erase ".\Debug\mark.sbr"
+ -@erase ".\Debug\mark_rts.obj"
+ -@erase ".\Debug\mark_rts.sbr"
+ -@erase ".\Debug\misc.obj"
+ -@erase ".\Debug\misc.sbr"
+ -@erase ".\Debug\new_hblk.obj"
+ -@erase ".\Debug\new_hblk.sbr"
+ -@erase ".\Debug\obj_map.obj"
+ -@erase ".\Debug\obj_map.sbr"
+ -@erase ".\Debug\os_dep.obj"
+ -@erase ".\Debug\os_dep.sbr"
+ -@erase ".\Debug\ptr_chck.obj"
+ -@erase ".\Debug\ptr_chck.sbr"
+ -@erase ".\Debug\reclaim.obj"
+ -@erase ".\Debug\reclaim.sbr"
+ -@erase ".\Debug\stubborn.obj"
+ -@erase ".\Debug\stubborn.sbr"
+ -@erase ".\Debug\typd_mlc.obj"
+ -@erase ".\Debug\typd_mlc.sbr"
+ -@erase ".\Debug\vc40.idb"
+ -@erase ".\Debug\vc40.pdb"
+ -@erase ".\Debug\win32_threads.obj"
+ -@erase ".\Debug\win32_threads.sbr"
+
+"$(OUTDIR)" :
+ if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"
+
+CPP=cl.exe
+# ADD BASE CPP /nologo /MTd /W3 /Gm /GX /Zi /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /YX /c
+# ADD CPP /nologo /MDd /W3 /Gm /GX /Zi /Od /I include /D "_DEBUG" /D "SILENT" /D "GC_BUILD" /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D "GC_WIN32_THREADS" /FR /YX /c
+CPP_PROJ=/nologo /MDd /W3 /Gm /GX /Zi /Od /I include /D "_DEBUG" /D "SILENT" /D "GC_BUILD"\
+ /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D\
+ "GC_WIN32_THREADS" /FR"$(INTDIR)/" /Fp"$(INTDIR)/gc.pch" /YX /Fo"$(INTDIR)/"\
+ /Fd"$(INTDIR)/" /c
+CPP_OBJS=.\Debug/
+CPP_SBRS=.\Debug/
+
+.c{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.c{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+MTL=mktyplib.exe
+# ADD BASE MTL /nologo /D "_DEBUG" /win32
+# ADD MTL /nologo /D "_DEBUG" /win32
+MTL_PROJ=/nologo /D "_DEBUG" /win32
+RSC=rc.exe
+# ADD BASE RSC /l 0x809 /d "_DEBUG"
+# ADD RSC /l 0x809 /d "_DEBUG"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+BSC32_FLAGS=/nologo /o"$(OUTDIR)/gc.bsc"
+BSC32_SBRS= \
+ ".\Debug\allchblk.sbr" \
+ ".\Debug\alloc.sbr" \
+ ".\Debug\blacklst.sbr" \
+ ".\Debug\checksums.sbr" \
+ ".\Debug\dbg_mlc.sbr" \
+ ".\Debug\dyn_load.sbr" \
+ ".\Debug\finalize.sbr" \
+ ".\Debug\gc_cpp.sbr" \
+ ".\Debug\headers.sbr" \
+ ".\Debug\mach_dep.sbr" \
+ ".\Debug\malloc.sbr" \
+ ".\Debug\mallocx.sbr" \
+ ".\Debug\mark.sbr" \
+ ".\Debug\mark_rts.sbr" \
+ ".\Debug\misc.sbr" \
+ ".\Debug\new_hblk.sbr" \
+ ".\Debug\obj_map.sbr" \
+ ".\Debug\os_dep.sbr" \
+ ".\Debug\ptr_chck.sbr" \
+ ".\Debug\reclaim.sbr" \
+ ".\Debug\stubborn.sbr" \
+ ".\Debug\typd_mlc.sbr" \
+ ".\Debug\win32_threads.sbr"
+
+".\Debug\gc.bsc" : "$(OUTDIR)" $(BSC32_SBRS)
+ $(BSC32) @<<
+ $(BSC32_FLAGS) $(BSC32_SBRS)
+<<
+
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /dll /debug /machine:I386
+# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /dll /incremental:no /map /debug /machine:I386
+LINK32_FLAGS=kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib\
+ advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib\
+ odbccp32.lib /nologo /subsystem:windows /dll /incremental:no\
+ /pdb:"$(OUTDIR)/gc.pdb" /map:"$(INTDIR)/gc.map" /debug /machine:I386\
+ /out:"$(OUTDIR)/gc.dll" /implib:"$(OUTDIR)/gc.lib"
+LINK32_OBJS= \
+ ".\Debug\allchblk.obj" \
+ ".\Debug\alloc.obj" \
+ ".\Debug\blacklst.obj" \
+ ".\Debug\checksums.obj" \
+ ".\Debug\dbg_mlc.obj" \
+ ".\Debug\dyn_load.obj" \
+ ".\Debug\finalize.obj" \
+ ".\Debug\gc_cpp.obj" \
+ ".\Debug\headers.obj" \
+ ".\Debug\mach_dep.obj" \
+ ".\Debug\malloc.obj" \
+ ".\Debug\mallocx.obj" \
+ ".\Debug\mark.obj" \
+ ".\Debug\mark_rts.obj" \
+ ".\Debug\misc.obj" \
+ ".\Debug\new_hblk.obj" \
+ ".\Debug\obj_map.obj" \
+ ".\Debug\os_dep.obj" \
+ ".\Debug\ptr_chck.obj" \
+ ".\Debug\reclaim.obj" \
+ ".\Debug\stubborn.obj" \
+ ".\Debug\typd_mlc.obj" \
+ ".\Debug\win32_threads.obj"
+
+".\Debug\gc.dll" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
+ $(LINK32) @<<
+ $(LINK32_FLAGS) $(LINK32_OBJS)
+<<
+
+!ELSEIF "$(CFG)" == "gctest - Win32 Release"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 0
+# PROP BASE Output_Dir "gctest\Release"
+# PROP BASE Intermediate_Dir "gctest\Release"
+# PROP BASE Target_Dir "gctest"
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 0
+# PROP Output_Dir "gctest\Release"
+# PROP Intermediate_Dir "gctest\Release"
+# PROP Target_Dir "gctest"
+OUTDIR=.\gctest\Release
+INTDIR=.\gctest\Release
+
+ALL : "gc - Win32 Release" ".\Release\gctest.exe"
+
+CLEAN :
+ -@erase ".\gctest\Release\test.obj"
+ -@erase ".\Release\gctest.exe"
+
+"$(OUTDIR)" :
+ if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"
+
+test.c : tests\test.c
+ copy tests\test.c test.c
+
+CPP=cl.exe
+# ADD BASE CPP /nologo /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /YX /c
+# ADD CPP /nologo /MD /W3 /GX /O2 /I include /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D "GC_WIN32_THREADS" /YX /c
+CPP_PROJ=/nologo /MD /W3 /GX /O2 /I include /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /D\
+ "ALL_INTERIOR_POINTERS" /D "__STDC__" /D "GC_WIN32_THREADS"\
+ /Fp"$(INTDIR)/gctest.pch" /YX /Fo"$(INTDIR)/" /c
+CPP_OBJS=.\gctest\Release/
+CPP_SBRS=.\.
+
+.c{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.c{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+MTL=mktyplib.exe
+# ADD BASE MTL /nologo /D "NDEBUG" /win32
+# ADD MTL /nologo /D "NDEBUG" /win32
+MTL_PROJ=/nologo /D "NDEBUG" /win32
+RSC=rc.exe
+# ADD BASE RSC /l 0x809 /d "NDEBUG"
+# ADD RSC /l 0x809 /d "NDEBUG"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+BSC32_FLAGS=/nologo /o"$(OUTDIR)/gctest.bsc"
+BSC32_SBRS= \
+
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /machine:I386
+# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /machine:I386 /out:"Release/gctest.exe"
+LINK32_FLAGS=kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib\
+ advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib\
+ odbccp32.lib /nologo /subsystem:windows /incremental:no\
+ /pdb:"$(OUTDIR)/gctest.pdb" /machine:I386 /out:"Release/gctest.exe"
+LINK32_OBJS= \
+ ".\gctest\Release\test.obj" \
+ ".\Release\gc.lib"
+
+".\Release\gctest.exe" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
+ $(LINK32) @<<
+ $(LINK32_FLAGS) $(LINK32_OBJS)
+<<
+
+!ELSEIF "$(CFG)" == "gctest - Win32 Debug"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 1
+# PROP BASE Output_Dir "gctest\Debug"
+# PROP BASE Intermediate_Dir "gctest\Debug"
+# PROP BASE Target_Dir "gctest"
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 1
+# PROP Output_Dir "gctest\Debug"
+# PROP Intermediate_Dir "gctest\Debug"
+# PROP Target_Dir "gctest"
+OUTDIR=.\gctest\Debug
+INTDIR=.\gctest\Debug
+
+ALL : "gc - Win32 Debug" ".\Debug\gctest.exe" ".\gctest\Debug\gctest.bsc"
+
+CLEAN :
+ -@erase ".\Debug\gctest.exe"
+ -@erase ".\gctest\Debug\gctest.bsc"
+ -@erase ".\gctest\Debug\gctest.map"
+ -@erase ".\gctest\Debug\gctest.pdb"
+ -@erase ".\gctest\Debug\test.obj"
+ -@erase ".\gctest\Debug\test.sbr"
+ -@erase ".\gctest\Debug\vc40.idb"
+ -@erase ".\gctest\Debug\vc40.pdb"
+
+"$(OUTDIR)" :
+ if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"
+
+CPP=cl.exe
+# ADD BASE CPP /nologo /W3 /Gm /GX /Zi /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /YX /c
+# ADD CPP /nologo /MDd /W3 /Gm /GX /Zi /Od /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D "GC_WIN32_THREADS" /FR /YX /c
+CPP_PROJ=/nologo /MDd /W3 /Gm /GX /Zi /Od /I include /D "_DEBUG" /D "WIN32" /D "_WINDOWS"\
+ /D "ALL_INTERIOR_POINTERS" /D "__STDC__" /D "GC_WIN32_THREADS" /FR"$(INTDIR)/"\
+ /Fp"$(INTDIR)/gctest.pch" /YX /Fo"$(INTDIR)/" /Fd"$(INTDIR)/" /c
+CPP_OBJS=.\gctest\Debug/
+CPP_SBRS=.\gctest\Debug/
+
+.c{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.c{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+MTL=mktyplib.exe
+# ADD BASE MTL /nologo /D "_DEBUG" /win32
+# ADD MTL /nologo /D "_DEBUG" /win32
+MTL_PROJ=/nologo /D "_DEBUG" /win32
+RSC=rc.exe
+# ADD BASE RSC /l 0x809 /d "_DEBUG"
+# ADD RSC /l 0x809 /d "_DEBUG"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+BSC32_FLAGS=/nologo /o"$(OUTDIR)/gctest.bsc"
+BSC32_SBRS= \
+ ".\gctest\Debug\test.sbr"
+
+".\gctest\Debug\gctest.bsc" : "$(OUTDIR)" $(BSC32_SBRS)
+ $(BSC32) @<<
+ $(BSC32_FLAGS) $(BSC32_SBRS)
+<<
+
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /debug /machine:I386
+# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /incremental:no /map /debug /machine:I386 /out:"Debug/gctest.exe"
+LINK32_FLAGS=kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib\
+ advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib\
+ odbccp32.lib /nologo /subsystem:windows /incremental:no\
+ /pdb:"$(OUTDIR)/gctest.pdb" /map:"$(INTDIR)/gctest.map" /debug /machine:I386\
+ /out:"Debug/gctest.exe"
+LINK32_OBJS= \
+ ".\Debug\gc.lib" \
+ ".\gctest\Debug\test.obj"
+
+".\Debug\gctest.exe" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
+ $(LINK32) @<<
+ $(LINK32_FLAGS) $(LINK32_OBJS)
+<<
+
+!ELSEIF "$(CFG)" == "cord - Win32 Release"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 0
+# PROP BASE Output_Dir "cord\Release"
+# PROP BASE Intermediate_Dir "cord\Release"
+# PROP BASE Target_Dir "cord"
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 0
+# PROP Output_Dir "cord\Release"
+# PROP Intermediate_Dir "cord\Release"
+# PROP Target_Dir "cord"
+OUTDIR=.\cord\Release
+INTDIR=.\cord\Release
+
+ALL : "gc - Win32 Release" ".\Release\de.exe"
+
+CLEAN :
+ -@erase ".\cord\Release\cordbscs.obj"
+ -@erase ".\cord\Release\cordxtra.obj"
+ -@erase ".\cord\Release\de.obj"
+ -@erase ".\cord\Release\de_win.obj"
+ -@erase ".\cord\Release\de_win.res"
+ -@erase ".\Release\de.exe"
+
+"$(OUTDIR)" :
+ if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"
+
+CPP=cl.exe
+# ADD BASE CPP /nologo /W3 /GX /O2 /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /YX /c
+# ADD CPP /nologo /MD /W3 /GX /O2 /I "." /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /YX /c
+CPP_PROJ=/nologo /MD /W3 /GX /O2 /I "." /I include /D "NDEBUG" /D "WIN32" /D "_WINDOWS" /D\
+ "ALL_INTERIOR_POINTERS" /Fp"$(INTDIR)/cord.pch" /YX /Fo"$(INTDIR)/" /c
+CPP_OBJS=.\cord\Release/
+CPP_SBRS=.\.
+
+.c{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.c{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+MTL=mktyplib.exe
+# ADD BASE MTL /nologo /D "NDEBUG" /win32
+# ADD MTL /nologo /D "NDEBUG" /win32
+MTL_PROJ=/nologo /D "NDEBUG" /win32
+RSC=rc.exe
+# ADD BASE RSC /l 0x809 /d "NDEBUG"
+# ADD RSC /l 0x809 /d "NDEBUG"
+RSC_PROJ=/l 0x809 /fo"$(INTDIR)/de_win.res" /d "NDEBUG"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+BSC32_FLAGS=/nologo /o"$(OUTDIR)/cord.bsc"
+BSC32_SBRS= \
+
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /machine:I386
+# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /machine:I386 /out:"Release/de.exe"
+LINK32_FLAGS=kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib\
+ advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib\
+ odbccp32.lib /nologo /subsystem:windows /incremental:no /pdb:"$(OUTDIR)/de.pdb"\
+ /machine:I386 /out:"Release/de.exe"
+LINK32_OBJS= \
+ ".\cord\Release\cordbscs.obj" \
+ ".\cord\Release\cordxtra.obj" \
+ ".\cord\Release\de.obj" \
+ ".\cord\Release\de_win.obj" \
+ ".\cord\Release\de_win.res" \
+ ".\Release\gc.lib"
+
+".\Release\de.exe" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
+ $(LINK32) @<<
+ $(LINK32_FLAGS) $(LINK32_OBJS)
+<<
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+# PROP BASE Use_MFC 0
+# PROP BASE Use_Debug_Libraries 1
+# PROP BASE Output_Dir "cord\Debug"
+# PROP BASE Intermediate_Dir "cord\Debug"
+# PROP BASE Target_Dir "cord"
+# PROP Use_MFC 0
+# PROP Use_Debug_Libraries 1
+# PROP Output_Dir "cord\Debug"
+# PROP Intermediate_Dir "cord\Debug"
+# PROP Target_Dir "cord"
+OUTDIR=.\cord\Debug
+INTDIR=.\cord\Debug
+
+ALL : "gc - Win32 Debug" ".\Debug\de.exe"
+
+CLEAN :
+ -@erase ".\cord\Debug\cordbscs.obj"
+ -@erase ".\cord\Debug\cordxtra.obj"
+ -@erase ".\cord\Debug\de.obj"
+ -@erase ".\cord\Debug\de.pdb"
+ -@erase ".\cord\Debug\de_win.obj"
+ -@erase ".\cord\Debug\de_win.res"
+ -@erase ".\cord\Debug\vc40.idb"
+ -@erase ".\cord\Debug\vc40.pdb"
+ -@erase ".\Debug\de.exe"
+ -@erase ".\Debug\de.ilk"
+
+"$(OUTDIR)" :
+ if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"
+
+CPP=cl.exe
+# ADD BASE CPP /nologo /W3 /Gm /GX /Zi /Od /D "WIN32" /D "_DEBUG" /D "_WINDOWS" /YX /c
+# ADD CPP /nologo /MDd /W3 /Gm /GX /Zi /Od /I "." /D "_DEBUG" /D "WIN32" /D "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /YX /c
+CPP_PROJ=/nologo /MDd /W3 /Gm /GX /Zi /Od /I "." /I include /D "_DEBUG" /D "WIN32" /D\
+ "_WINDOWS" /D "ALL_INTERIOR_POINTERS" /Fp"$(INTDIR)/cord.pch" /YX\
+ /Fo"$(INTDIR)/" /Fd"$(INTDIR)/" /c
+CPP_OBJS=.\cord\Debug/
+CPP_SBRS=.\.
+
+.c{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_OBJS)}.obj:
+ $(CPP) $(CPP_PROJ) $<
+
+.c{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cpp{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+.cxx{$(CPP_SBRS)}.sbr:
+ $(CPP) $(CPP_PROJ) $<
+
+MTL=mktyplib.exe
+# ADD BASE MTL /nologo /D "_DEBUG" /win32
+# ADD MTL /nologo /D "_DEBUG" /win32
+MTL_PROJ=/nologo /D "_DEBUG" /win32
+RSC=rc.exe
+# ADD BASE RSC /l 0x809 /d "_DEBUG"
+# ADD RSC /l 0x809 /d "_DEBUG"
+RSC_PROJ=/l 0x809 /fo"$(INTDIR)/de_win.res" /d "_DEBUG"
+BSC32=bscmake.exe
+# ADD BASE BSC32 /nologo
+# ADD BSC32 /nologo
+BSC32_FLAGS=/nologo /o"$(OUTDIR)/cord.bsc"
+BSC32_SBRS= \
+
+LINK32=link.exe
+# ADD BASE LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /debug /machine:I386
+# ADD LINK32 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /nologo /subsystem:windows /debug /machine:I386 /out:"Debug/de.exe"
+LINK32_FLAGS=kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib\
+ advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib\
+ odbccp32.lib /nologo /subsystem:windows /incremental:yes\
+ /pdb:"$(OUTDIR)/de.pdb" /debug /machine:I386 /out:"Debug/de.exe"
+LINK32_OBJS= \
+ ".\cord\Debug\cordbscs.obj" \
+ ".\cord\Debug\cordxtra.obj" \
+ ".\cord\Debug\de.obj" \
+ ".\cord\Debug\de_win.obj" \
+ ".\cord\Debug\de_win.res" \
+ ".\Debug\gc.lib"
+
+".\Debug\de.exe" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
+ $(LINK32) @<<
+ $(LINK32_FLAGS) $(LINK32_OBJS)
+<<
+
+!ENDIF
+
+################################################################################
+# Begin Target
+
+# Name "gc - Win32 Release"
+# Name "gc - Win32 Debug"
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+!ENDIF
+
+################################################################################
+# Begin Source File
+
+SOURCE=.\gc_cpp.cpp
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_RECLA=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ ".\include\gc_cpp.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_RECLA=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\gc_cpp.obj" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+".\Release\gc_cpp.sbr" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_RECLA=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ ".\include\gc_cpp.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_RECLA=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\gc_cpp.obj" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+".\Debug\gc_cpp.sbr" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\reclaim.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_RECLA=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_RECLA=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\reclaim.obj" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+".\Release\reclaim.sbr" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_RECLA=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_RECLA=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\reclaim.obj" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+".\Debug\reclaim.sbr" : $(SOURCE) $(DEP_CPP_RECLA) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+
+################################################################################
+# Begin Source File
+
+SOURCE=.\os_dep.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_OS_DE=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\STAT.H"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_OS_DE=\
+ ".\il\PCR_IL.h"\
+ ".\mm\PCR_MM.h"\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+ ".\vd\PCR_VD.h"\
+
+
+".\Release\os_dep.obj" : $(SOURCE) $(DEP_CPP_OS_DE) "$(INTDIR)"
+
+".\Release\os_dep.sbr" : $(SOURCE) $(DEP_CPP_OS_DE) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_OS_DE=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\STAT.H"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_OS_DE=\
+ ".\il\PCR_IL.h"\
+ ".\mm\PCR_MM.h"\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+ ".\vd\PCR_VD.h"\
+
+
+".\Debug\os_dep.obj" : $(SOURCE) $(DEP_CPP_OS_DE) "$(INTDIR)"
+
+".\Debug\os_dep.sbr" : $(SOURCE) $(DEP_CPP_OS_DE) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\misc.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_MISC_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MISC_=\
+ ".\il\PCR_IL.h"\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\misc.obj" : $(SOURCE) $(DEP_CPP_MISC_) "$(INTDIR)"
+
+".\Release\misc.sbr" : $(SOURCE) $(DEP_CPP_MISC_) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_MISC_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MISC_=\
+ ".\il\PCR_IL.h"\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\misc.obj" : $(SOURCE) $(DEP_CPP_MISC_) "$(INTDIR)"
+
+".\Debug\misc.sbr" : $(SOURCE) $(DEP_CPP_MISC_) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\mark_rts.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_MARK_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MARK_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\mark_rts.obj" : $(SOURCE) $(DEP_CPP_MARK_) "$(INTDIR)"
+
+".\Release\mark_rts.sbr" : $(SOURCE) $(DEP_CPP_MARK_) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_MARK_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MARK_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\mark_rts.obj" : $(SOURCE) $(DEP_CPP_MARK_) "$(INTDIR)"
+
+".\Debug\mark_rts.sbr" : $(SOURCE) $(DEP_CPP_MARK_) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\mach_dep.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_MACH_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MACH_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\mach_dep.obj" : $(SOURCE) $(DEP_CPP_MACH_) "$(INTDIR)"
+
+".\Release\mach_dep.sbr" : $(SOURCE) $(DEP_CPP_MACH_) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_MACH_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MACH_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\mach_dep.obj" : $(SOURCE) $(DEP_CPP_MACH_) "$(INTDIR)"
+
+".\Debug\mach_dep.sbr" : $(SOURCE) $(DEP_CPP_MACH_) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\headers.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_HEADE=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_HEADE=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\headers.obj" : $(SOURCE) $(DEP_CPP_HEADE) "$(INTDIR)"
+
+".\Release\headers.sbr" : $(SOURCE) $(DEP_CPP_HEADE) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_HEADE=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_HEADE=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\headers.obj" : $(SOURCE) $(DEP_CPP_HEADE) "$(INTDIR)"
+
+".\Debug\headers.sbr" : $(SOURCE) $(DEP_CPP_HEADE) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\alloc.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_ALLOC=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_ALLOC=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\alloc.obj" : $(SOURCE) $(DEP_CPP_ALLOC) "$(INTDIR)"
+
+".\Release\alloc.sbr" : $(SOURCE) $(DEP_CPP_ALLOC) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_ALLOC=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_ALLOC=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\alloc.obj" : $(SOURCE) $(DEP_CPP_ALLOC) "$(INTDIR)"
+
+".\Debug\alloc.sbr" : $(SOURCE) $(DEP_CPP_ALLOC) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\allchblk.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_ALLCH=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_ALLCH=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\allchblk.obj" : $(SOURCE) $(DEP_CPP_ALLCH) "$(INTDIR)"
+
+".\Release\allchblk.sbr" : $(SOURCE) $(DEP_CPP_ALLCH) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_ALLCH=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_ALLCH=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\allchblk.obj" : $(SOURCE) $(DEP_CPP_ALLCH) "$(INTDIR)"
+
+".\Debug\allchblk.sbr" : $(SOURCE) $(DEP_CPP_ALLCH) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\stubborn.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_STUBB=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_STUBB=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\stubborn.obj" : $(SOURCE) $(DEP_CPP_STUBB) "$(INTDIR)"
+
+".\Release\stubborn.sbr" : $(SOURCE) $(DEP_CPP_STUBB) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_STUBB=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_STUBB=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\stubborn.obj" : $(SOURCE) $(DEP_CPP_STUBB) "$(INTDIR)"
+
+".\Debug\stubborn.sbr" : $(SOURCE) $(DEP_CPP_STUBB) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\obj_map.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_OBJ_M=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_OBJ_M=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\obj_map.obj" : $(SOURCE) $(DEP_CPP_OBJ_M) "$(INTDIR)"
+
+".\Release\obj_map.sbr" : $(SOURCE) $(DEP_CPP_OBJ_M) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_OBJ_M=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_OBJ_M=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\obj_map.obj" : $(SOURCE) $(DEP_CPP_OBJ_M) "$(INTDIR)"
+
+".\Debug\obj_map.sbr" : $(SOURCE) $(DEP_CPP_OBJ_M) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\new_hblk.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_NEW_H=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_NEW_H=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\new_hblk.obj" : $(SOURCE) $(DEP_CPP_NEW_H) "$(INTDIR)"
+
+".\Release\new_hblk.sbr" : $(SOURCE) $(DEP_CPP_NEW_H) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_NEW_H=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_NEW_H=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\new_hblk.obj" : $(SOURCE) $(DEP_CPP_NEW_H) "$(INTDIR)"
+
+".\Debug\new_hblk.sbr" : $(SOURCE) $(DEP_CPP_NEW_H) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\mark.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_MARK_C=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MARK_C=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\mark.obj" : $(SOURCE) $(DEP_CPP_MARK_C) "$(INTDIR)"
+
+".\Release\mark.sbr" : $(SOURCE) $(DEP_CPP_MARK_C) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_MARK_C=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MARK_C=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\mark.obj" : $(SOURCE) $(DEP_CPP_MARK_C) "$(INTDIR)"
+
+".\Debug\mark.sbr" : $(SOURCE) $(DEP_CPP_MARK_C) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\malloc.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_MALLO=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MALLO=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\malloc.obj" : $(SOURCE) $(DEP_CPP_MALLO) "$(INTDIR)"
+
+".\Release\malloc.sbr" : $(SOURCE) $(DEP_CPP_MALLO) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_MALLO=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MALLO=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\malloc.obj" : $(SOURCE) $(DEP_CPP_MALLO) "$(INTDIR)"
+
+".\Debug\malloc.sbr" : $(SOURCE) $(DEP_CPP_MALLO) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\mallocx.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_MALLX=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MALLX=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\mallocx.obj" : $(SOURCE) $(DEP_CPP_MALLX) "$(INTDIR)"
+
+".\Release\mallocx.sbr" : $(SOURCE) $(DEP_CPP_MALLX) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_MALLX=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_MALLX=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\mallocx.obj" : $(SOURCE) $(DEP_CPP_MALLX) "$(INTDIR)"
+
+".\Debug\mallocx.sbr" : $(SOURCE) $(DEP_CPP_MALLX) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\finalize.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_FINAL=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_FINAL=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\finalize.obj" : $(SOURCE) $(DEP_CPP_FINAL) "$(INTDIR)"
+
+".\Release\finalize.sbr" : $(SOURCE) $(DEP_CPP_FINAL) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_FINAL=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_FINAL=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\finalize.obj" : $(SOURCE) $(DEP_CPP_FINAL) "$(INTDIR)"
+
+".\Debug\finalize.sbr" : $(SOURCE) $(DEP_CPP_FINAL) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\dbg_mlc.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_DBG_M=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_DBG_M=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\dbg_mlc.obj" : $(SOURCE) $(DEP_CPP_DBG_M) "$(INTDIR)"
+
+".\Release\dbg_mlc.sbr" : $(SOURCE) $(DEP_CPP_DBG_M) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_DBG_M=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_DBG_M=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\dbg_mlc.obj" : $(SOURCE) $(DEP_CPP_DBG_M) "$(INTDIR)"
+
+".\Debug\dbg_mlc.sbr" : $(SOURCE) $(DEP_CPP_DBG_M) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\blacklst.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_BLACK=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_BLACK=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\blacklst.obj" : $(SOURCE) $(DEP_CPP_BLACK) "$(INTDIR)"
+
+".\Release\blacklst.sbr" : $(SOURCE) $(DEP_CPP_BLACK) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_BLACK=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_BLACK=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\blacklst.obj" : $(SOURCE) $(DEP_CPP_BLACK) "$(INTDIR)"
+
+".\Debug\blacklst.sbr" : $(SOURCE) $(DEP_CPP_BLACK) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\typd_mlc.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_TYPD_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ ".\include\gc_typed.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_TYPD_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\typd_mlc.obj" : $(SOURCE) $(DEP_CPP_TYPD_) "$(INTDIR)"
+
+".\Release\typd_mlc.sbr" : $(SOURCE) $(DEP_CPP_TYPD_) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_TYPD_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ ".\include\gc_typed.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_TYPD_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\typd_mlc.obj" : $(SOURCE) $(DEP_CPP_TYPD_) "$(INTDIR)"
+
+".\Debug\typd_mlc.sbr" : $(SOURCE) $(DEP_CPP_TYPD_) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\ptr_chck.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_PTR_C=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_PTR_C=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\ptr_chck.obj" : $(SOURCE) $(DEP_CPP_PTR_C) "$(INTDIR)"
+
+".\Release\ptr_chck.sbr" : $(SOURCE) $(DEP_CPP_PTR_C) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_PTR_C=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_pmark.h"\
+ ".\include\gc_mark.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_PTR_C=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\ptr_chck.obj" : $(SOURCE) $(DEP_CPP_PTR_C) "$(INTDIR)"
+
+".\Debug\ptr_chck.sbr" : $(SOURCE) $(DEP_CPP_PTR_C) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\dyn_load.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_DYN_L=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\STAT.H"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_DYN_L=\
+ ".\il\PCR_IL.h"\
+ ".\mm\PCR_MM.h"\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\dyn_load.obj" : $(SOURCE) $(DEP_CPP_DYN_L) "$(INTDIR)"
+
+".\Release\dyn_load.sbr" : $(SOURCE) $(DEP_CPP_DYN_L) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_DYN_L=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\STAT.H"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_DYN_L=\
+ ".\il\PCR_IL.h"\
+ ".\mm\PCR_MM.h"\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\dyn_load.obj" : $(SOURCE) $(DEP_CPP_DYN_L) "$(INTDIR)"
+
+".\Debug\dyn_load.sbr" : $(SOURCE) $(DEP_CPP_DYN_L) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\win32_threads.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_WIN32=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_WIN32=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\win32_threads.obj" : $(SOURCE) $(DEP_CPP_WIN32) "$(INTDIR)"
+
+".\Release\win32_threads.sbr" : $(SOURCE) $(DEP_CPP_WIN32) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_WIN32=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_WIN32=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\win32_threads.obj" : $(SOURCE) $(DEP_CPP_WIN32) "$(INTDIR)"
+
+".\Debug\win32_threads.sbr" : $(SOURCE) $(DEP_CPP_WIN32) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\checksums.c
+
+!IF "$(CFG)" == "gc - Win32 Release"
+
+DEP_CPP_CHECK=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_CHECK=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Release\checksums.obj" : $(SOURCE) $(DEP_CPP_CHECK) "$(INTDIR)"
+
+".\Release\checksums.sbr" : $(SOURCE) $(DEP_CPP_CHECK) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gc - Win32 Debug"
+
+DEP_CPP_CHECK=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_CHECK=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+".\Debug\checksums.obj" : $(SOURCE) $(DEP_CPP_CHECK) "$(INTDIR)"
+
+".\Debug\checksums.sbr" : $(SOURCE) $(DEP_CPP_CHECK) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+# End Target
+################################################################################
+# Begin Target
+
+# Name "gctest - Win32 Release"
+# Name "gctest - Win32 Debug"
+
+!IF "$(CFG)" == "gctest - Win32 Release"
+
+!ELSEIF "$(CFG)" == "gctest - Win32 Debug"
+
+!ENDIF
+
+################################################################################
+# Begin Project Dependency
+
+# Project_Dep_Name "gc"
+
+!IF "$(CFG)" == "gctest - Win32 Release"
+
+"gc - Win32 Release" :
+ $(MAKE) /$(MAKEFLAGS) /F ".\gc.mak" CFG="gc - Win32 Release"
+
+!ELSEIF "$(CFG)" == "gctest - Win32 Debug"
+
+"gc - Win32 Debug" :
+ $(MAKE) /$(MAKEFLAGS) /F ".\gc.mak" CFG="gc - Win32 Debug"
+
+!ENDIF
+
+# End Project Dependency
+################################################################################
+# Begin Source File
+
+SOURCE=.\tests\test.c
+DEP_CPP_TEST_=\
+ ".\include\private\gcconfig.h"\
+ ".\include\gc.h"\
+ ".\include\private\gc_hdrs.h"\
+ ".\include\private\gc_priv.h"\
+ ".\include\gc_typed.h"\
+ {$(INCLUDE)}"\sys\TYPES.H"\
+
+NODEP_CPP_TEST_=\
+ ".\th\PCR_Th.h"\
+ ".\th\PCR_ThCrSec.h"\
+ ".\th\PCR_ThCtl.h"\
+
+
+!IF "$(CFG)" == "gctest - Win32 Release"
+
+
+".\gctest\Release\test.obj" : $(SOURCE) $(DEP_CPP_TEST_) "$(INTDIR)"
+
+
+!ELSEIF "$(CFG)" == "gctest - Win32 Debug"
+
+
+".\gctest\Debug\test.obj" : $(SOURCE) $(DEP_CPP_TEST_) "$(INTDIR)"
+
+".\gctest\Debug\test.sbr" : $(SOURCE) $(DEP_CPP_TEST_) "$(INTDIR)"
+
+
+!ENDIF
+
+# End Source File
+# End Target
+################################################################################
+# Begin Target
+
+# Name "cord - Win32 Release"
+# Name "cord - Win32 Debug"
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+!ENDIF
+
+################################################################################
+# Begin Project Dependency
+
+# Project_Dep_Name "gc"
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+"gc - Win32 Release" :
+ $(MAKE) /$(MAKEFLAGS) /F ".\gc.mak" CFG="gc - Win32 Release"
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+"gc - Win32 Debug" :
+ $(MAKE) /$(MAKEFLAGS) /F ".\gc.mak" CFG="gc - Win32 Debug"
+
+!ENDIF
+
+# End Project Dependency
+################################################################################
+# Begin Source File
+
+SOURCE=.\cord\de_win.c
+DEP_CPP_DE_WI=\
+ ".\include\cord.h"\
+ ".\cord\de_cmds.h"\
+ ".\cord\de_win.h"\
+ ".\include\private\cord_pos.h"\
+
+NODEP_CPP_DE_WI=\
+ ".\include\gc.h"\
+
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+
+".\cord\Release\de_win.obj" : $(SOURCE) $(DEP_CPP_DE_WI) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+
+".\cord\Debug\de_win.obj" : $(SOURCE) $(DEP_CPP_DE_WI) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\cord\de.c
+DEP_CPP_DE_C2e=\
+ ".\include\cord.h"\
+ ".\cord\de_cmds.h"\
+ ".\cord\de_win.h"\
+ ".\include\private\cord_pos.h"\
+
+NODEP_CPP_DE_C2e=\
+ ".\include\gc.h"\
+
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+
+".\cord\Release\de.obj" : $(SOURCE) $(DEP_CPP_DE_C2e) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+
+".\cord\Debug\de.obj" : $(SOURCE) $(DEP_CPP_DE_C2e) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\cord\cordxtra.c
+DEP_CPP_CORDX=\
+ ".\include\cord.h"\
+ ".\include\ec.h"\
+ ".\include\private\cord_pos.h"\
+
+NODEP_CPP_CORDX=\
+ ".\include\gc.h"\
+
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+
+".\cord\Release\cordxtra.obj" : $(SOURCE) $(DEP_CPP_CORDX) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+
+".\cord\Debug\cordxtra.obj" : $(SOURCE) $(DEP_CPP_CORDX) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\cord\cordbscs.c
+DEP_CPP_CORDB=\
+ ".\include\cord.h"\
+ ".\include\private\cord_pos.h"\
+
+NODEP_CPP_CORDB=\
+ ".\include\gc.h"\
+
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+
+".\cord\Release\cordbscs.obj" : $(SOURCE) $(DEP_CPP_CORDB) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+
+".\cord\Debug\cordbscs.obj" : $(SOURCE) $(DEP_CPP_CORDB) "$(INTDIR)"
+ $(CPP) $(CPP_PROJ) $(SOURCE)
+
+
+!ENDIF
+
+# End Source File
+################################################################################
+# Begin Source File
+
+SOURCE=.\cord\de_win.RC
+
+!IF "$(CFG)" == "cord - Win32 Release"
+
+
+".\cord\Release\de_win.res" : $(SOURCE) "$(INTDIR)"
+ $(RSC) /l 0x809 /fo"$(INTDIR)/de_win.res" /i "cord" /d "NDEBUG" $(SOURCE)
+
+
+!ELSEIF "$(CFG)" == "cord - Win32 Debug"
+
+
+".\cord\Debug\de_win.res" : $(SOURCE) "$(INTDIR)"
+ $(RSC) /l 0x809 /fo"$(INTDIR)/de_win.res" /i "cord" /d "_DEBUG" $(SOURCE)
+
+
+!ENDIF
+
+# End Source File
+# End Target
+# End Project
+################################################################################
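For orientation, a client of the gc.dll produced by the configurations above might look like the following sketch. The GC_WIN32_THREADS definition and the /MD runtime must match the CPP_PROJ settings in this makefile; the allocation itself is illustrative.

    /* Sketch only: build with cl /MD and link against the generated gc.lib. */
    #define GC_WIN32_THREADS   /* must match the macro used to build gc.dll */
    #include "gc.h"

    int main(void)
    {
        GC_INIT();                          /* portable initialization hook */
        char *buf = (char *)GC_MALLOC(64);  /* collectable, pointer-scanned */
        buf[0] = '\0';
        return 0;                   /* buf is reclaimed once unreachable */
    }
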
Added: llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cc
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cc?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cc (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cc Thu Nov 8 16:56:19 2007
@@ -0,0 +1,61 @@
+/*************************************************************************
+Copyright (c) 1994 by Xerox Corporation. All rights reserved.
+
+THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+
+ Last modified on Sat Nov 19 19:31:14 PST 1994 by ellis
+ on Sat Jun 8 15:10:00 PST 1994 by boehm
+
+Permission is hereby granted to copy this code for any purpose,
+provided the above notices are retained on all copies.
+
+This implementation module for gc_cpp.h provides an implementation of
+the global operators "new" and "delete" that calls the Boehm
+allocator. All objects allocated by this implementation will be
+non-collectable but part of the root set of the collector.
+
+You should ensure (using implementation-dependent techniques) that the
+linker finds this module before the library that defines the default
+built-in "new" and "delete".
+
+Authors: John R. Ellis and Jesse Hull
+
+**************************************************************************/
+/* Boehm, December 20, 1994 7:26 pm PST */
+
+#include "gc_cpp.h"
+
+void* operator new( size_t size ) {
+ return GC_MALLOC_UNCOLLECTABLE( size );}
+
+void operator delete( void* obj ) {
+ GC_FREE( obj );}
+
+#ifdef GC_OPERATOR_NEW_ARRAY
+
+void* operator new[]( size_t size ) {
+ return GC_MALLOC_UNCOLLECTABLE( size );}
+
+void operator delete[]( void* obj ) {
+ GC_FREE( obj );}
+
+#endif /* GC_OPERATOR_NEW_ARRAY */
+
+#ifdef _MSC_VER
+
+// This operator new overload is used by VC++ for Debug builds.
+void* operator new( size_t size,
+ int ,//nBlockUse,
+ const char * szFileName,
+ int nLine )
+{
+#ifndef GC_DEBUG
+ return GC_malloc_uncollectable( size );
+#else
+ return GC_debug_malloc_uncollectable(size, szFileName, nLine);
+#endif
+}
+
+#endif /* _MSC_VER */
+
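As the header comment notes, linking this module makes the global operators route through the collector. A minimal sketch of the resulting behavior (the allocated type and value are illustrative):

    #include "gc_cpp.h"

    int main()
    {
        int *p = new int(42);  // via GC_MALLOC_UNCOLLECTABLE: never collected,
                               // but scanned as part of the GC root set
        delete p;              // explicit deallocation via GC_FREE
        return 0;
    }
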
Added: llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cpp
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cpp?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cpp (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gc_cpp.cpp Thu Nov 8 16:56:19 2007
@@ -0,0 +1,2 @@
+// Visual C++ seems to prefer a .cpp extension to .cc
+#include "gc_cpp.cc"
Added: llvm-gcc-4.2/trunk/boehm-gc/gc_dlopen.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gc_dlopen.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gc_dlopen.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gc_dlopen.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,91 @@
+/*
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1997 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 2000 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ * Original author: Bill Janssen
+ * Heavily modified by Hans Boehm and others
+ */
+
+/*
+ * This used to be in dyn_load.c. It was extracted into a separate file
+ * to avoid having to link against libdl.{a,so} if the client doesn't call
+ * dlopen. Of course this fails if the collector is in a dynamic
+ * library. -HB
+ */
+
+#include "private/gc_priv.h"
+
+# if (defined(GC_PTHREADS) && !defined(GC_DARWIN_THREADS)) \
+ || defined(GC_SOLARIS_THREADS)
+
+# if defined(dlopen) && !defined(GC_USE_LD_WRAP)
+ /* To support various threads pkgs, gc.h interposes on dlopen by */
+ /* defining "dlopen" to be "GC_dlopen", which is implemented below. */
+ /* However, both GC_FirstDLOpenedLinkMap() and GC_dlopen() use the */
+ /* real system dlopen() in their implementation. We first remove */
+ /* gc.h's dlopen definition and restore it later, after GC_dlopen(). */
+# undef dlopen
+# endif
+
+ /* Make sure we're not in the middle of a collection, and make */
+ /* sure we don't start any. Returns previous value of GC_dont_gc. */
+ /* This is invoked prior to a dlopen call to avoid synchronization */
+ /* issues. We can't just acquire the allocation lock, since startup */
+ /* code in dlopen may try to allocate. */
+ /* This solution risks heap growth in the presence of many dlopen */
+ /* calls in either a multithreaded environment, or if the library */
+ /* initialization code allocates substantial amounts of GC'ed memory. */
+ /* But I don't know of a better solution. */
+ static void disable_gc_for_dlopen()
+ {
+ LOCK();
+ while (GC_incremental && GC_collection_in_progress()) {
+ GC_collect_a_little_inner(1000);
+ }
+ ++GC_dont_gc;
+ UNLOCK();
+ }
+
+ /* Redefine dlopen to guarantee mutual exclusion with */
+ /* GC_register_dynamic_libraries. */
+ /* Should probably happen for other operating systems, too. */
+
+#include <dlfcn.h>
+
+#ifdef GC_USE_LD_WRAP
+ void * __wrap_dlopen(const char *path, int mode)
+#else
+ void * GC_dlopen(path, mode)
+ GC_CONST char * path;
+ int mode;
+#endif
+{
+ void * result;
+
+# ifndef USE_PROC_FOR_LIBRARIES
+ disable_gc_for_dlopen();
+# endif
+# ifdef GC_USE_LD_WRAP
+ result = (void *)__real_dlopen(path, mode);
+# else
+ result = dlopen(path, mode);
+# endif
+# ifndef USE_PROC_FOR_LIBRARIES
+ GC_enable(); /* undoes disable_gc_for_dlopen */
+# endif
+ return(result);
+}
+# endif /* GC_PTHREADS || GC_SOLARIS_THREADS ... */
+
+
+
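For callers, the interposition works as sketched below: with the
appropriate GC_XXX_THREADS macro defined before gc.h is included, gc.h
redefines dlopen to GC_dlopen, so an ordinary-looking dlopen call goes
through the wrapper above, which holds off collections while the
loader's startup code runs. This only applies on platforms covered by
the #if guard above; load_plugin and the RTLD_NOW choice are
illustrative only.

    #include <dlfcn.h>
    #define GC_THREADS       /* must precede gc.h; see the GC docs */
    #include <gc.h>          /* may redefine dlopen to GC_dlopen here */

    /* Illustrative helper, not part of this patch. */
    void *load_plugin(const char *path)
    {
        /* Resolves to GC_dlopen() where gc.h interposes, keeping
           dlopen and GC_register_dynamic_libraries mutually
           excluded. */
        return dlopen(path, RTLD_NOW);
    }
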
Added: llvm-gcc-4.2/trunk/boehm-gc/gcc_support.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gcc_support.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gcc_support.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gcc_support.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,516 @@
+/***************************************************************************
+
+Interface between g++ and Boehm GC
+
+ Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+
+ THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+
+ Permission is hereby granted to copy this code for any purpose,
+ provided the above notices are retained on all copies.
+
+ Last modified on Sun Jul 16 23:21:14 PDT 1995 by ellis
+
+This module provides runtime support for implementing the
+Ellis/Detlefs GC proposal, "Safe, Efficient Garbage Collection for
+C++", within g++, using its -fgc-keyword extension. It defines
+versions of __builtin_new, __builtin_new_gc, __builtin_vec_new,
+__builtin_vec_new_gc, __builtin_delete, and __builtin_vec_delete that
+invoke the Boehm GC. It also implements the WeakPointer.h interface.
+
+This module assumes the following configuration options of the Boehm GC:
+
+ -DALL_INTERIOR_POINTERS
+ -DDONT_ADD_BYTE_AT_END
+
+This module adds its own required padding to the end of objects to
+support C/C++ "one-past-the-object" pointer semantics.
+
+****************************************************************************/
+
+#include <stddef.h>
+#include "gc.h"
+
+#if defined(__STDC__)
+# define PROTO( args ) args
+#else
+# define PROTO( args ) ()
+# endif
+
+#include <limits.h>	/* for CHAR_BIT */
+#define BITSPERBYTE CHAR_BIT	/* the portable way to get this */
+
+
+typedef void (*vfp) PROTO(( void ));
+extern vfp __new_handler;
+extern void __default_new_handler PROTO(( void ));
+
+
+/* A destructor_proc is the compiler generated procedure representing a
+C++ destructor. The "flag" argument is a hidden argument following some
+compiler convention. */
+
+typedef void (*destructor_proc) PROTO(( void* this, int flag ));
+
+
+/***************************************************************************
+
+A BI_header is the header the compiler adds to the front of
+new-allocated arrays of objects with destructors. The header is
+padded out to a double, because that's what the compiler does to
+ensure proper alignment of array elements on some architectures.
+
+int NUM_ARRAY_ELEMENTS (void* o)
+ returns the number of array elements for array object o.
+
+char* FIRST_ELEMENT_P (void* o)
+ returns the address of the first element of array object o.
+
+***************************************************************************/
+
+typedef struct BI_header {
+ int nelts;
+ char padding [sizeof( double ) - sizeof( int )];
+ /* Better way to do this? */
+} BI_header;
+
+#define NUM_ARRAY_ELEMENTS( o ) \
+ (((BI_header*) o)->nelts)
+
+#define FIRST_ELEMENT_P( o ) \
+ ((char*) o + sizeof( BI_header ))
+
+
+/***************************************************************************
+
+The __builtin_new routines add a descriptor word to the end of each
+object. The descriptor serves two purposes.
+
+First, the descriptor acts as padding, implementing C/C++ pointer
+semantics. C and C++ allow a valid array pointer to be incremented
+one past the end of an object. The extra padding ensures that the
+collector will recognize that such a pointer points to the object and
+not the next object in memory.
+
+Second, the descriptor stores three extra pieces of information,
+whether an object has a registered finalizer (destructor), whether it
+may have any weak pointers referencing it, and for collectible arrays,
+the element size of the array. The element size is required for the
+array's finalizer to iterate through the elements of the array. (An
+alternative design would have the compiler generate a finalizer
+procedure for each different array type. But given the overhead of
+finalization, there isn't any efficiency to be gained by that.)
+
+The descriptor must be added to non-collectible as well as collectible
+objects, since the Ellis/Detlefs proposal allows "pointer to gc T" to
+be assigned to a "pointer to T", which could then be deleted. Thus,
+__builtin_delete must determine at runtime whether an object is
+collectible, whether it has weak pointers referencing it, and whether
+it may have a finalizer that needs unregistering. Though
+GC_REGISTER_FINALIZER doesn't care if you ask it to unregister a
+finalizer for an object that doesn't have one, it is a non-trivial
+procedure that does a hash look-up, etc. The descriptor trades a
+little extra space for a significant increase in time on the fast path
+through delete. (A similar argument applies to
+GC_UNREGISTER_DISAPPEARING_LINK).
+
+For non-array types, the space for the descriptor could be shrunk to a
+single byte for storing the "has finalizer" flag. But this would save
+space only on arrays of char (whose size is not a multiple of the word
+size) and structs whose largest member is less than a word in size
+(very infrequent). And it would require that programmers actually
+remember to call "delete[]" instead of "delete" (which they should,
+but there are probably lots of buggy programs out there). For the
+moment, the space savings seem not worthwhile, especially considering
+that the Boehm GC is already quite space competitive with other
+mallocs.
+
+
+Given a pointer o to the base of an object:
+
+Descriptor* DESCRIPTOR (void* o)
+ returns a pointer to the descriptor for o.
+
+The implementation of descriptors relies on the fact that the GC
+implementation allocates objects in units of the machine's natural
+word size (e.g. 32 bits on a SPARC, 64 bits on an Alpha).
+
+**************************************************************************/
+
+typedef struct Descriptor {
+ unsigned has_weak_pointers: 1;
+ unsigned has_finalizer: 1;
+ unsigned element_size: BITSPERBYTE * sizeof( unsigned ) - 2;
+} Descriptor;
+
+#define DESCRIPTOR( o ) \
+ ((Descriptor*) ((char*)(o) + GC_size( o ) - sizeof( Descriptor )))
+
+
+/**************************************************************************
+
+Implementations of global operator new() and operator delete()
+
+***************************************************************************/
+
+
+void* __builtin_new( size )
+ size_t size;
+ /*
+ For non-gc non-array types, the compiler generates calls to
+ __builtin_new, which allocates non-collected storage via
+ GC_MALLOC_UNCOLLECTABLE. This ensures that the non-collected
+ storage will be part of the collector's root set, required by the
+ Ellis/Detlefs semantics. */
+{
+ vfp handler = __new_handler ? __new_handler : __default_new_handler;
+
+ while (1) {
+ void* o = GC_MALLOC_UNCOLLECTABLE( size + sizeof( Descriptor ) );
+ if (o != 0) return o;
+ (*handler) ();}}
+
+
+void* __builtin_vec_new( size )
+ size_t size;
+ /*
+ For non-gc array types, the compiler generates calls to
+ __builtin_vec_new. */
+{
+ return __builtin_new( size );}
+
+
+void* __builtin_new_gc( size )
+ size_t size;
+ /*
+ For gc non-array types, the compiler generates calls to
+ __builtin_new_gc, which allocates collected storage via
+ GC_MALLOC. */
+{
+ vfp handler = __new_handler ? __new_handler : __default_new_handler;
+
+ while (1) {
+ void* o = GC_MALLOC( size + sizeof( Descriptor ) );
+ if (o != 0) return o;
+ (*handler) ();}}
+
+
+void* __builtin_new_gc_a( size )
+ size_t size;
+ /*
+ For non-pointer-containing gc non-array types, the compiler
+ generates calls to __builtin_new_gc_a, which allocates collected
+ storage via GC_MALLOC_ATOMIC. */
+{
+ vfp handler = __new_handler ? __new_handler : __default_new_handler;
+
+ while (1) {
+ void* o = GC_MALLOC_ATOMIC( size + sizeof( Descriptor ) );
+ if (o != 0) return o;
+ (*handler) ();}}
+
+
+void* __builtin_vec_new_gc( size )
+ size_t size;
+ /*
+ For gc array types, the compiler generates calls to
+ __builtin_vec_new_gc. */
+{
+ return __builtin_new_gc( size );}
+
+
+void* __builtin_vec_new_gc_a( size )
+ size_t size;
+ /*
+ For non-pointer-containing gc array types, the compiler generates
+ calls to __builtin_vec_new_gc_a. */
+{
+ return __builtin_new_gc_a( size );}
+
+
+static void call_destructor( o, data )
+ void* o;
+ void* data;
+ /*
+ call_destructor is the GC finalizer proc registered for non-array
+ gc objects with destructors. Its client data is the destructor
+ proc, which it calls with the magic integer 2, a special flag
+ obeying the compiler convention for destructors. */
+{
+ ((destructor_proc) data)( o, 2 );}
+
+
+void* __builtin_new_gc_dtor( o, d )
+ void* o;
+ destructor_proc d;
+ /*
+ The compiler generates a call to __builtin_new_gc_dtor to register
+ the destructor "d" of a non-array gc object "o" as a GC finalizer.
+ The destructor is registered via
+ GC_REGISTER_FINALIZER_IGNORE_SELF, which causes the collector to
+ ignore pointers from the object to itself when determining when
+ the object can be finalized. This is necessary due to the self
+ pointers used in the internal representation of multiply-inherited
+ objects. */
+{
+ Descriptor* desc = DESCRIPTOR( o );
+
+ GC_REGISTER_FINALIZER_IGNORE_SELF( o, call_destructor, d, 0, 0 );
+ desc->has_finalizer = 1;}
+
+
+static void call_array_destructor( o, data )
+ void* o;
+ void* data;
+ /*
+ call_array_destructor is the GC finalizer proc registered for gc
+ array objects whose elements have destructors. Its client data is
+ the destructor proc. It iterates through the elements of the
+ array in reverse order, calling the destructor on each. */
+{
+ int num = NUM_ARRAY_ELEMENTS( o );
+ Descriptor* desc = DESCRIPTOR( o );
+ size_t size = desc->element_size;
+ char* first_p = FIRST_ELEMENT_P( o );
+ char* p = first_p + (num - 1) * size;
+
+ if (num > 0) {
+ while (1) {
+ ((destructor_proc) data)( p, 2 );
+ if (p == first_p) break;
+ p -= size;}}}
+
+
+void* __builtin_vec_new_gc_dtor( first_elem, d, element_size )
+ void* first_elem;
+ destructor_proc d;
+ size_t element_size;
+ /*
+ The compiler generates a call to __builtin_vec_new_gc_dtor to
+ register the destructor "d" of a gc array object as a GC
+ finalizer. "first_elem" points to the first element of the array,
+ *not* the beginning of the object (this makes the generated call
+ to this function smaller). The elements of the array are of size
+ "element_size". The destructor is registered as in
+    __builtin_new_gc_dtor. */
+{
+ void* o = (char*) first_elem - sizeof( BI_header );
+ Descriptor* desc = DESCRIPTOR( o );
+
+ GC_REGISTER_FINALIZER_IGNORE_SELF( o, call_array_destructor, d, 0, 0 );
+ desc->element_size = element_size;
+ desc->has_finalizer = 1;}
+
+
+void __builtin_delete( o )
+ void* o;
+ /*
+ The compiler generates calls to __builtin_delete for operator
+ delete(). The GC currently requires that any registered
+ finalizers be unregistered before explicitly freeing an object.
+ If the object has any weak pointers referencing it, we can't
+ actually free it now. */
+{
+ if (o != 0) {
+ Descriptor* desc = DESCRIPTOR( o );
+ if (desc->has_finalizer) GC_REGISTER_FINALIZER( o, 0, 0, 0, 0 );
+ if (! desc->has_weak_pointers) GC_FREE( o );}}
+
+
+void __builtin_vec_delete( o )
+ void* o;
+ /*
+    The compiler generates calls to __builtin_vec_delete for operator
+ delete[](). */
+{
+ __builtin_delete( o );}
+
+
+/**************************************************************************
+
+Implementations of the template class WeakPointer from WeakPointer.h
+
+***************************************************************************/
+
+typedef struct WeakPointer {
+ void* pointer;
+} WeakPointer;
+
+
+void* _WeakPointer_New( t )
+ void* t;
+{
+ if (t == 0) {
+ return 0;}
+ else {
+ void* base = GC_base( t );
+ WeakPointer* wp =
+ (WeakPointer*) GC_MALLOC_ATOMIC( sizeof( WeakPointer ) );
+ Descriptor* desc = DESCRIPTOR( base );
+
+ wp->pointer = t;
+ desc->has_weak_pointers = 1;
+ GC_general_register_disappearing_link( &wp->pointer, base );
+ return wp;}}
+
+
+static void* PointerWithLock( wp )
+ WeakPointer* wp;
+{
+ if (wp == 0 || wp->pointer == 0) {
+ return 0;}
+ else {
+ return (void*) wp->pointer;}}
+
+
+void* _WeakPointer_Pointer( wp )
+ WeakPointer* wp;
+{
+ return (void*) GC_call_with_alloc_lock( PointerWithLock, wp );}
+
+
+typedef struct EqualClosure {
+ WeakPointer* wp1;
+ WeakPointer* wp2;
+} EqualClosure;
+
+
+static void* EqualWithLock( ec )
+ EqualClosure* ec;
+{
+ if (ec->wp1 == 0 || ec->wp2 == 0) {
+ return (void*) (ec->wp1 == ec->wp2);}
+ else {
+ return (void*) (ec->wp1->pointer == ec->wp2->pointer);}}
+
+
+int _WeakPointer_Equal( wp1, wp2 )
+ WeakPointer* wp1;
+ WeakPointer* wp2;
+{
+ EqualClosure ec;
+
+ ec.wp1 = wp1;
+ ec.wp2 = wp2;
+ return (int) GC_call_with_alloc_lock( EqualWithLock, &ec );}
+
+
+int _WeakPointer_Hash( wp )
+ WeakPointer* wp;
+{
+ return (int) _WeakPointer_Pointer( wp );}
+
+
+/**************************************************************************
+
+Implementations of the template class CleanUp from WeakPointer.h
+
+***************************************************************************/
+
+typedef struct Closure {
+ void (*c) PROTO(( void* d, void* t ));
+ ptrdiff_t t_offset;
+ void* d;
+} Closure;
+
+
+static void _CleanUp_CallClosure( obj, data )
+ void* obj;
+ void* data;
+{
+ Closure* closure = (Closure*) data;
+ closure->c( closure->d, (char*) obj + closure->t_offset );}
+
+
+void _CleanUp_Set( t, c, d )
+ void* t;
+ void (*c) PROTO(( void* d, void* t ));
+ void* d;
+{
+ void* base = GC_base( t );
+ Descriptor* desc = DESCRIPTOR( t );
+
+ if (c == 0) {
+ GC_REGISTER_FINALIZER_IGNORE_SELF( base, 0, 0, 0, 0 );
+ desc->has_finalizer = 0;}
+ else {
+ Closure* closure = (Closure*) GC_MALLOC( sizeof( Closure ) );
+ closure->c = c;
+ closure->t_offset = (char*) t - (char*) base;
+ closure->d = d;
+ GC_REGISTER_FINALIZER_IGNORE_SELF( base, _CleanUp_CallClosure,
+ closure, 0, 0 );
+ desc->has_finalizer = 1;}}
+
+
+void _CleanUp_Call( t )
+ void* t;
+{
+ /* ? Aren't we supposed to deactivate weak pointers to t too?
+ Why? */
+ void* base = GC_base( t );
+ void* d;
+ GC_finalization_proc f;
+
+ GC_REGISTER_FINALIZER( base, 0, 0, &f, &d );
+ f( base, d );}
+
+
+typedef struct QueueElem {
+ void* o;
+ GC_finalization_proc f;
+ void* d;
+ struct QueueElem* next;
+} QueueElem;
+
+
+void* _CleanUp_Queue_NewHead()
+{
+ return GC_MALLOC( sizeof( QueueElem ) );}
+
+
+static void _CleanUp_Queue_Enqueue( obj, data )
+ void* obj;
+ void* data;
+{
+ QueueElem* q = (QueueElem*) data;
+ QueueElem* head = q->next;
+
+ q->o = obj;
+ q->next = head->next;
+ head->next = q;}
+
+
+void _CleanUp_Queue_Set( h, t )
+ void* h;
+ void* t;
+{
+ QueueElem* head = (QueueElem*) h;
+ void* base = GC_base( t );
+ void* d;
+ GC_finalization_proc f;
+ QueueElem* q = (QueueElem*) GC_MALLOC( sizeof( QueueElem ) );
+
+ GC_REGISTER_FINALIZER( base, _CleanUp_Queue_Enqueue, q, &f, &d );
+ q->f = f;
+ q->d = d;
+ q->next = head;}
+
+
+int _CleanUp_Queue_Call( h )
+ void* h;
+{
+ QueueElem* head = (QueueElem*) h;
+ QueueElem* q = head->next;
+
+ if (q == 0) {
+ return 0;}
+ else {
+ head->next = q->next;
+ q->next = 0;
+ if (q->f != 0) q->f( q->o, q->d );
+ return 1;}}
+
+
+
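The trailing-descriptor convention above depends only on GC_base() and
GC_size() reporting the block the collector actually granted, so the
descriptor can be recovered from any pointer into the object. A
freestanding sketch of that address arithmetic; the Descriptor struct
mirrors the one in this file, while descriptor_of and main are
illustrative:

    #include <stdio.h>
    #include <gc.h>

    typedef struct Descriptor {
        unsigned has_weak_pointers: 1;
        unsigned has_finalizer: 1;
        unsigned element_size: 8 * sizeof( unsigned ) - 2;
    } Descriptor;

    /* Same computation as the DESCRIPTOR macro above: the descriptor
       occupies the last bytes of the block the collector handed out. */
    static Descriptor *descriptor_of(void *p)
    {
        void *base = GC_base(p);
        return (Descriptor *)((char *)base + GC_size(base)
                              - sizeof(Descriptor));
    }

    int main(void)
    {
        GC_INIT();
        /* Allocate as __builtin_new_gc does: payload plus descriptor. */
        void *o = GC_MALLOC(64 + sizeof(Descriptor));
        Descriptor *d = descriptor_of(o);
        d->has_finalizer = 0;
        d->has_weak_pointers = 0;
        printf("object %p, descriptor %p\n", o, (void *)d);
        return 0;
    }
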
Added: llvm-gcc-4.2/trunk/boehm-gc/gcj_mlc.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gcj_mlc.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gcj_mlc.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gcj_mlc.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,321 @@
+/*
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1999 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ */
+/* Boehm, July 31, 1995 5:02 pm PDT */
+
+/*
+ * This is an allocator interface tuned for gcj (the GNU static
+ * java compiler).
+ *
+ * Each allocated object has a pointer in its first word to a vtable,
+ * which for our purposes is simply a structure describing the type of
+ * the object.
+ * This descriptor structure contains a GC marking descriptor at offset
+ * MARK_DESCR_OFFSET.
+ *
+ * It is hoped that this interface may also be useful for other systems,
+ * possibly with some tuning of the constants. But the immediate goal
+ * is to get better gcj performance.
+ *
+ * We assume:
+ * 1) We have an ANSI conforming C compiler.
+ * 2) Counting on explicit initialization of this interface is OK.
+ * 3) FASTLOCK is not a significant win.
+ */
+
+#include "private/gc_pmark.h"
+#include "gc_gcj.h"
+#include "private/dbg_mlc.h"
+
+#ifdef GC_GCJ_SUPPORT
+
+GC_bool GC_gcj_malloc_initialized = FALSE;
+
+int GC_gcj_kind; /* Object kind for objects with descriptors */
+ /* in "vtable". */
+int GC_gcj_debug_kind; /* The kind of objects that is always marked */
+ /* with a mark proc call. */
+
+ptr_t * GC_gcjobjfreelist;
+ptr_t * GC_gcjdebugobjfreelist;
+
+/* Caller does not hold allocation lock. */
+void GC_init_gcj_malloc(int mp_index, void * /* really GC_mark_proc */mp)
+{
+ register int i;
+ GC_bool ignore_gcj_info;
+ DCL_LOCK_STATE;
+
+ GC_init(); /* In case it's not already done. */
+ DISABLE_SIGNALS();
+ LOCK();
+ if (GC_gcj_malloc_initialized) {
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return;
+ }
+ GC_gcj_malloc_initialized = TRUE;
+ ignore_gcj_info = (0 != GETENV("GC_IGNORE_GCJ_INFO"));
+# ifdef CONDPRINT
+ if (GC_print_stats && ignore_gcj_info) {
+ GC_printf0("Gcj-style type information is disabled!\n");
+ }
+# endif
+    if (mp_index >= GC_n_mark_procs) ABORT("GC_init_gcj_malloc: bad index");
+    GC_ASSERT(GC_mark_procs[mp_index] == (GC_mark_proc)0); /* unused */
+    GC_mark_procs[mp_index] = (GC_mark_proc)mp;
+ /* Set up object kind gcj-style indirect descriptor. */
+ GC_gcjobjfreelist = (ptr_t *)GC_new_free_list_inner();
+ if (ignore_gcj_info) {
+ /* Use a simple length-based descriptor, thus forcing a fully */
+ /* conservative scan. */
+ GC_gcj_kind = GC_new_kind_inner((void **)GC_gcjobjfreelist,
+ (0 | GC_DS_LENGTH),
+ TRUE, TRUE);
+ } else {
+ GC_gcj_kind = GC_new_kind_inner(
+ (void **)GC_gcjobjfreelist,
+ (((word)(-MARK_DESCR_OFFSET - GC_INDIR_PER_OBJ_BIAS))
+ | GC_DS_PER_OBJECT),
+ FALSE, TRUE);
+ }
+ /* Set up object kind for objects that require mark proc call. */
+ if (ignore_gcj_info) {
+ GC_gcj_debug_kind = GC_gcj_kind;
+ GC_gcjdebugobjfreelist = GC_gcjobjfreelist;
+ } else {
+ GC_gcjdebugobjfreelist = (ptr_t *)GC_new_free_list_inner();
+ GC_gcj_debug_kind = GC_new_kind_inner(
+ (void **)GC_gcjdebugobjfreelist,
+ GC_MAKE_PROC(mp_index,
+ 1 /* allocated with debug info */),
+ FALSE, TRUE);
+ }
+ UNLOCK();
+ ENABLE_SIGNALS();
+}
+
+ptr_t GC_clear_stack();
+
+#define GENERAL_MALLOC(lb,k) \
+ (GC_PTR)GC_clear_stack(GC_generic_malloc_inner((word)lb, k))
+
+#define GENERAL_MALLOC_IOP(lb,k) \
+ (GC_PTR)GC_clear_stack(GC_generic_malloc_inner_ignore_off_page(lb, k))
+
+/* We need a mechanism to release the lock and invoke finalizers. */
+/* We don't really have an opportunity to do this on a rarely executed */
+/* path on which the lock is not held. Thus we check at a */
+/* rarely executed point at which it is safe to release the lock. */
+/* We do this even where we could just call GC_INVOKE_FINALIZERS, */
+/* since it's probably cheaper and certainly more uniform. */
+/* FIXME - Consider doing the same elsewhere? */
+static void maybe_finalize()
+{
+ static int last_finalized_no = 0;
+
+ if (GC_gc_no == last_finalized_no) return;
+ if (!GC_is_initialized) return;
+ UNLOCK();
+ GC_INVOKE_FINALIZERS();
+ last_finalized_no = GC_gc_no;
+ LOCK();
+}
+
+/* Allocate an object, clear it, and store the pointer to the */
+/* type structure (vtable in gcj). */
+/* This adds a byte at the end of the object if GC_malloc would.*/
+void * GC_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr)
+{
+register ptr_t op;
+register ptr_t * opp;
+register word lw;
+DCL_LOCK_STATE;
+
+ if( EXPECT(SMALL_OBJ(lb), 1) ) {
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+# endif
+ opp = &(GC_gcjobjfreelist[lw]);
+ LOCK();
+ op = *opp;
+ if(EXPECT(op == 0, 0)) {
+ maybe_finalize();
+ op = (ptr_t)GENERAL_MALLOC((word)lb, GC_gcj_kind);
+ if (0 == op) {
+ UNLOCK();
+ return(GC_oom_fn(lb));
+ }
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb]; /* May have been uninitialized. */
+# endif
+ } else {
+ *opp = obj_link(op);
+ GC_words_allocd += lw;
+ }
+ *(void **)op = ptr_to_struct_containing_descr;
+ GC_ASSERT(((void **)op)[1] == 0);
+ UNLOCK();
+ } else {
+ LOCK();
+ maybe_finalize();
+ op = (ptr_t)GENERAL_MALLOC((word)lb, GC_gcj_kind);
+ if (0 == op) {
+ UNLOCK();
+ return(GC_oom_fn(lb));
+ }
+ *(void **)op = ptr_to_struct_containing_descr;
+ UNLOCK();
+ }
+ return((GC_PTR) op);
+}
+
+/* Similar to GC_gcj_malloc, but add debug info. This is allocated */
+/* with GC_gcj_debug_kind. */
+GC_PTR GC_debug_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr,
+ GC_EXTRA_PARAMS)
+{
+ GC_PTR result;
+
+ /* We're careful to avoid extra calls, which could */
+ /* confuse the backtrace. */
+ LOCK();
+ maybe_finalize();
+ result = GC_generic_malloc_inner(lb + DEBUG_BYTES, GC_gcj_debug_kind);
+ if (result == 0) {
+ UNLOCK();
+ GC_err_printf2("GC_debug_gcj_malloc(%ld, 0x%lx) returning NIL (",
+ (unsigned long) lb,
+ (unsigned long) ptr_to_struct_containing_descr);
+ GC_err_puts(s);
+ GC_err_printf1(":%ld)\n", (unsigned long)i);
+ return(GC_oom_fn(lb));
+ }
+ *((void **)((ptr_t)result + sizeof(oh))) = ptr_to_struct_containing_descr;
+ UNLOCK();
+ if (!GC_debugging_started) {
+ GC_start_debugging();
+ }
+ ADD_CALL_CHAIN(result, ra);
+ return (GC_store_debug_info(result, (word)lb, s, (word)i));
+}
+
+/* Similar to GC_gcj_malloc, but the size is in words, and we don't */
+/* adjust it. The size is assumed to be such that it can be */
+/* allocated as a small object. */
+void * GC_gcj_fast_malloc(size_t lw, void * ptr_to_struct_containing_descr)
+{
+ptr_t op;
+ptr_t * opp;
+DCL_LOCK_STATE;
+
+ opp = &(GC_gcjobjfreelist[lw]);
+ LOCK();
+ op = *opp;
+ if( EXPECT(op == 0, 0) ) {
+ maybe_finalize();
+ op = (ptr_t)GC_clear_stack(
+ GC_generic_malloc_words_small_inner(lw, GC_gcj_kind));
+ if (0 == op) {
+ UNLOCK();
+ return GC_oom_fn(WORDS_TO_BYTES(lw));
+ }
+ } else {
+ *opp = obj_link(op);
+ GC_words_allocd += lw;
+ }
+ *(void **)op = ptr_to_struct_containing_descr;
+ UNLOCK();
+ return((GC_PTR) op);
+}
+
+/* And a debugging version of the above: */
+void * GC_debug_gcj_fast_malloc(size_t lw,
+ void * ptr_to_struct_containing_descr,
+ GC_EXTRA_PARAMS)
+{
+ GC_PTR result;
+ size_t lb = WORDS_TO_BYTES(lw);
+
+ /* We clone the code from GC_debug_gcj_malloc, so that we */
+  /* don't end up with extra frames on the stack, which could  */
+ /* confuse the backtrace. */
+ LOCK();
+ maybe_finalize();
+ result = GC_generic_malloc_inner(lb + DEBUG_BYTES, GC_gcj_debug_kind);
+ if (result == 0) {
+ UNLOCK();
+ GC_err_printf2("GC_debug_gcj_fast_malloc(%ld, 0x%lx) returning NIL (",
+ (unsigned long) lw,
+ (unsigned long) ptr_to_struct_containing_descr);
+ GC_err_puts(s);
+ GC_err_printf1(":%ld)\n", (unsigned long)i);
+ return GC_oom_fn(WORDS_TO_BYTES(lw));
+ }
+ *((void **)((ptr_t)result + sizeof(oh))) = ptr_to_struct_containing_descr;
+ UNLOCK();
+ if (!GC_debugging_started) {
+ GC_start_debugging();
+ }
+ ADD_CALL_CHAIN(result, ra);
+ return (GC_store_debug_info(result, (word)lb, s, (word)i));
+}
+
+void * GC_gcj_malloc_ignore_off_page(size_t lb,
+ void * ptr_to_struct_containing_descr)
+{
+register ptr_t op;
+register ptr_t * opp;
+register word lw;
+DCL_LOCK_STATE;
+
+ if( SMALL_OBJ(lb) ) {
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+# endif
+ opp = &(GC_gcjobjfreelist[lw]);
+ LOCK();
+ if( (op = *opp) == 0 ) {
+ maybe_finalize();
+ op = (ptr_t)GENERAL_MALLOC_IOP(lb, GC_gcj_kind);
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb]; /* May have been uninitialized. */
+# endif
+ } else {
+ *opp = obj_link(op);
+ GC_words_allocd += lw;
+ }
+ *(void **)op = ptr_to_struct_containing_descr;
+ UNLOCK();
+ } else {
+ LOCK();
+ maybe_finalize();
+ op = (ptr_t)GENERAL_MALLOC_IOP(lb, GC_gcj_kind);
+ if (0 != op) {
+ *(void **)op = ptr_to_struct_containing_descr;
+ }
+ UNLOCK();
+ }
+ return((GC_PTR) op);
+}
+
+#else
+
+char GC_no_gcj_support;
+
+#endif /* GC_GCJ_SUPPORT */
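A client of this interface hands GC_gcj_malloc a pointer to a type
descriptor; the object's first word is set to it, and the word at
MARK_DESCR_OFFSET inside the descriptor drives marking. A minimal
sketch, assuming the length-based descriptor form also used in the
ignore_gcj_info path above; the two-word fake_vtable layout is
illustrative (real gcj vtables carry much more), and GC_DS_LENGTH is
taken from the exported gc_mark.h:

    #include <gc.h>
    #include <gc_mark.h>     /* for GC_DS_LENGTH */
    #include <gc_gcj.h>

    /* Hypothetical minimal "vtable": the second word holds the mark
       descriptor, matching the MARK_DESCR_OFFSET convention above.
       (nbytes | GC_DS_LENGTH) asks the marker to scan the first
       nbytes of the object conservatively. */
    struct fake_vtable {
        void *dummy;
        size_t mark_descr;
    };
    static struct fake_vtable a_vtable =
        { 0, (4 * sizeof(void *)) | GC_DS_LENGTH };

    int main(void)
    {
        GC_INIT();
        GC_init_gcj_malloc(0, 0);  /* mark-proc slot 0; no proc needed
                                      while only GC_gcj_kind is used */
        void **obj = GC_gcj_malloc(4 * sizeof(void *), &a_vtable);
        /* obj[0] now points at a_vtable; the other words are clear. */
        return obj == 0;
    }
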
Added: llvm-gcc-4.2/trunk/boehm-gc/gcname.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/gcname.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/gcname.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/gcname.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,13 @@
+#include <stdio.h>
+#include "version.h"
+
+int main()
+{
+ if (GC_ALPHA_VERSION == GC_NOT_ALPHA) {
+ printf("gc%d.%d", GC_VERSION_MAJOR, GC_VERSION_MINOR);
+ } else {
+ printf("gc%d.%dalpha%d", GC_VERSION_MAJOR,
+ GC_VERSION_MINOR, GC_ALPHA_VERSION);
+ }
+ return 0;
+}
Added: llvm-gcc-4.2/trunk/boehm-gc/headers.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/headers.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/headers.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/headers.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,358 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996 by Silicon Graphics. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/*
+ * This implements:
+ * 1. allocation of heap block headers
+ * 2. a map from addresses to the headers of the heap blocks containing them
+ *
+ * Access speed is crucial. We implement an index structure based on a 2
+ * level tree.
+ */
+
+# include "private/gc_priv.h"
+
+bottom_index * GC_all_bottom_indices = 0;
+ /* Pointer to first (lowest addr) */
+ /* bottom_index. */
+
+bottom_index * GC_all_bottom_indices_end = 0;
+ /* Pointer to last (highest addr) */
+ /* bottom_index. */
+
+/* Non-macro version of header location routine */
+hdr * GC_find_header(h)
+ptr_t h;
+{
+# ifdef HASH_TL
+ register hdr * result;
+ GET_HDR(h, result);
+ return(result);
+# else
+ return(HDR_INNER(h));
+# endif
+}
+
+/* Routines to dynamically allocate collector data structures that will */
+/* never be freed. */
+
+static ptr_t scratch_free_ptr = 0;
+
+/* GC_scratch_last_end_ptr is end point of last obtained scratch area. */
+/* GC_scratch_end_ptr is end point of current scratch area. */
+
+ptr_t GC_scratch_alloc(bytes)
+register word bytes;
+{
+ register ptr_t result = scratch_free_ptr;
+
+# ifdef ALIGN_DOUBLE
+# define GRANULARITY (2 * sizeof(word))
+# else
+# define GRANULARITY sizeof(word)
+# endif
+ bytes += GRANULARITY-1;
+ bytes &= ~(GRANULARITY-1);
+ scratch_free_ptr += bytes;
+ if (scratch_free_ptr <= GC_scratch_end_ptr) {
+ return(result);
+ }
+ {
+ word bytes_to_get = MINHINCR * HBLKSIZE;
+
+ if (bytes_to_get <= bytes) {
+ /* Undo the damage, and get memory directly */
+ bytes_to_get = bytes;
+# ifdef USE_MMAP
+ bytes_to_get += GC_page_size - 1;
+ bytes_to_get &= ~(GC_page_size - 1);
+# endif
+ result = (ptr_t)GET_MEM(bytes_to_get);
+ scratch_free_ptr -= bytes;
+ GC_scratch_last_end_ptr = result + bytes;
+ return(result);
+ }
+ result = (ptr_t)GET_MEM(bytes_to_get);
+ if (result == 0) {
+# ifdef PRINTSTATS
+ GC_printf0("Out of memory - trying to allocate less\n");
+# endif
+ scratch_free_ptr -= bytes;
+ bytes_to_get = bytes;
+# ifdef USE_MMAP
+ bytes_to_get += GC_page_size - 1;
+ bytes_to_get &= ~(GC_page_size - 1);
+# endif
+ return((ptr_t)GET_MEM(bytes_to_get));
+ }
+ scratch_free_ptr = result;
+ GC_scratch_end_ptr = scratch_free_ptr + bytes_to_get;
+ GC_scratch_last_end_ptr = GC_scratch_end_ptr;
+ return(GC_scratch_alloc(bytes));
+ }
+}
+
+static hdr * hdr_free_list = 0;
+
+/* Return an uninitialized header */
+static hdr * alloc_hdr()
+{
+ register hdr * result;
+
+ if (hdr_free_list == 0) {
+ result = (hdr *) GC_scratch_alloc((word)(sizeof(hdr)));
+ } else {
+ result = hdr_free_list;
+ hdr_free_list = (hdr *) (result -> hb_next);
+ }
+ return(result);
+}
+
+static void free_hdr(hhdr)
+hdr * hhdr;
+{
+ hhdr -> hb_next = (struct hblk *) hdr_free_list;
+ hdr_free_list = hhdr;
+}
+
+hdr * GC_invalid_header;
+
+#ifdef USE_HDR_CACHE
+ word GC_hdr_cache_hits = 0;
+ word GC_hdr_cache_misses = 0;
+#endif
+
+void GC_init_headers()
+{
+ register unsigned i;
+
+ GC_all_nils = (bottom_index *)GC_scratch_alloc((word)sizeof(bottom_index));
+ BZERO(GC_all_nils, sizeof(bottom_index));
+ for (i = 0; i < TOP_SZ; i++) {
+ GC_top_index[i] = GC_all_nils;
+ }
+ GC_invalid_header = alloc_hdr();
+ GC_invalidate_map(GC_invalid_header);
+}
+
+/* Make sure that there is a bottom level index block for address addr */
+/* Return FALSE on failure. */
+static GC_bool get_index(addr)
+word addr;
+{
+ word hi = (word)(addr) >> (LOG_BOTTOM_SZ + LOG_HBLKSIZE);
+ bottom_index * r;
+ bottom_index * p;
+ bottom_index ** prev;
+ bottom_index *pi;
+
+# ifdef HASH_TL
+ unsigned i = TL_HASH(hi);
+ bottom_index * old;
+
+ old = p = GC_top_index[i];
+ while(p != GC_all_nils) {
+ if (p -> key == hi) return(TRUE);
+ p = p -> hash_link;
+ }
+ r = (bottom_index*)GC_scratch_alloc((word)(sizeof (bottom_index)));
+ if (r == 0) return(FALSE);
+ BZERO(r, sizeof (bottom_index));
+ r -> hash_link = old;
+ GC_top_index[i] = r;
+# else
+ if (GC_top_index[hi] != GC_all_nils) return(TRUE);
+ r = (bottom_index*)GC_scratch_alloc((word)(sizeof (bottom_index)));
+ if (r == 0) return(FALSE);
+ GC_top_index[hi] = r;
+ BZERO(r, sizeof (bottom_index));
+# endif
+ r -> key = hi;
+ /* Add it to the list of bottom indices */
+ prev = &GC_all_bottom_indices; /* pointer to p */
+ pi = 0; /* bottom_index preceding p */
+ while ((p = *prev) != 0 && p -> key < hi) {
+ pi = p;
+ prev = &(p -> asc_link);
+ }
+ r -> desc_link = pi;
+ if (0 == p) {
+ GC_all_bottom_indices_end = r;
+ } else {
+ p -> desc_link = r;
+ }
+ r -> asc_link = p;
+ *prev = r;
+ return(TRUE);
+}
+
+/* Install a header for block h. */
+/* The header is uninitialized. */
+/* Returns the header or 0 on failure. */
+struct hblkhdr * GC_install_header(h)
+register struct hblk * h;
+{
+ hdr * result;
+
+ if (!get_index((word) h)) return(0);
+ result = alloc_hdr();
+ SET_HDR(h, result);
+# ifdef USE_MUNMAP
+ result -> hb_last_reclaimed = GC_gc_no;
+# endif
+ return(result);
+}
+
+/* Set up forwarding counts for block h of size sz */
+GC_bool GC_install_counts(h, sz)
+register struct hblk * h;
+register word sz; /* bytes */
+{
+ register struct hblk * hbp;
+ register int i;
+
+ for (hbp = h; (char *)hbp < (char *)h + sz; hbp += BOTTOM_SZ) {
+ if (!get_index((word) hbp)) return(FALSE);
+ }
+ if (!get_index((word)h + sz - 1)) return(FALSE);
+ for (hbp = h + 1; (char *)hbp < (char *)h + sz; hbp += 1) {
+ i = HBLK_PTR_DIFF(hbp, h);
+ SET_HDR(hbp, (hdr *)(i > MAX_JUMP? MAX_JUMP : i));
+ }
+ return(TRUE);
+}
+
+/* Remove the header for block h */
+void GC_remove_header(h)
+register struct hblk * h;
+{
+ hdr ** ha;
+
+ GET_HDR_ADDR(h, ha);
+ free_hdr(*ha);
+ *ha = 0;
+}
+
+/* Remove forwarding counts for h */
+void GC_remove_counts(h, sz)
+register struct hblk * h;
+register word sz; /* bytes */
+{
+ register struct hblk * hbp;
+
+ for (hbp = h+1; (char *)hbp < (char *)h + sz; hbp += 1) {
+ SET_HDR(hbp, 0);
+ }
+}
+
+/* Apply fn to all allocated blocks */
+/*VARARGS1*/
+void GC_apply_to_all_blocks(fn, client_data)
+void (*fn) GC_PROTO((struct hblk *h, word client_data));
+word client_data;
+{
+ register int j;
+ register bottom_index * index_p;
+
+ for (index_p = GC_all_bottom_indices; index_p != 0;
+ index_p = index_p -> asc_link) {
+ for (j = BOTTOM_SZ-1; j >= 0;) {
+ if (!IS_FORWARDING_ADDR_OR_NIL(index_p->index[j])) {
+ if (index_p->index[j]->hb_map != GC_invalid_map) {
+ (*fn)(((struct hblk *)
+ (((index_p->key << LOG_BOTTOM_SZ) + (word)j)
+ << LOG_HBLKSIZE)),
+ client_data);
+ }
+ j--;
+ } else if (index_p->index[j] == 0) {
+ j--;
+ } else {
+ j -= (word)(index_p->index[j]);
+ }
+ }
+ }
+}
+
+/* Get the next valid block whose address is at least h */
+/* Return 0 if there is none. */
+struct hblk * GC_next_used_block(h)
+struct hblk * h;
+{
+ register bottom_index * bi;
+ register word j = ((word)h >> LOG_HBLKSIZE) & (BOTTOM_SZ-1);
+
+ GET_BI(h, bi);
+ if (bi == GC_all_nils) {
+ register word hi = (word)h >> (LOG_BOTTOM_SZ + LOG_HBLKSIZE);
+ bi = GC_all_bottom_indices;
+ while (bi != 0 && bi -> key < hi) bi = bi -> asc_link;
+ j = 0;
+ }
+ while(bi != 0) {
+ while (j < BOTTOM_SZ) {
+ hdr * hhdr = bi -> index[j];
+ if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
+ j++;
+ } else {
+ if (hhdr->hb_map != GC_invalid_map) {
+ return((struct hblk *)
+ (((bi -> key << LOG_BOTTOM_SZ) + j)
+ << LOG_HBLKSIZE));
+ } else {
+ j += divHBLKSZ(hhdr -> hb_sz);
+ }
+ }
+ }
+ j = 0;
+ bi = bi -> asc_link;
+ }
+ return(0);
+}
+
+/* Get the last (highest address) block whose address is */
+/* at most h. Return 0 if there is none. */
+/* Unlike the above, this may return a free block. */
+struct hblk * GC_prev_block(h)
+struct hblk * h;
+{
+ register bottom_index * bi;
+ register signed_word j = ((word)h >> LOG_HBLKSIZE) & (BOTTOM_SZ-1);
+
+ GET_BI(h, bi);
+ if (bi == GC_all_nils) {
+ register word hi = (word)h >> (LOG_BOTTOM_SZ + LOG_HBLKSIZE);
+ bi = GC_all_bottom_indices_end;
+ while (bi != 0 && bi -> key > hi) bi = bi -> desc_link;
+ j = BOTTOM_SZ - 1;
+ }
+ while(bi != 0) {
+ while (j >= 0) {
+ hdr * hhdr = bi -> index[j];
+ if (0 == hhdr) {
+ --j;
+ } else if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
+ j -= (signed_word)hhdr;
+ } else {
+ return((struct hblk *)
+ (((bi -> key << LOG_BOTTOM_SZ) + j)
+ << LOG_HBLKSIZE));
+ }
+ }
+ j = BOTTOM_SZ - 1;
+ bi = bi -> desc_link;
+ }
+ return(0);
+}
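The lookup that GET_HDR and friends perform can be pictured with a
freestanding sketch: split the address into high bits (top index),
middle bits (bottom-index slot), and the offset within the block. The
constants and names below are illustrative stand-ins; the real TOP_SZ,
LOG_BOTTOM_SZ, and LOG_HBLKSIZE come from gc_priv.h and vary by
configuration.

    #include <stdint.h>

    #define LOG_HBLKSIZE_  12              /* 4K heap blocks (example) */
    #define LOG_BOTTOM_SZ_ 10
    #define BOTTOM_SZ_     (1 << LOG_BOTTOM_SZ_)
    #define TOP_SZ_        (1 << 11)

    typedef struct hdr_s hdr;              /* block header, opaque here */

    typedef struct bi_s {
        hdr *index[BOTTOM_SZ_];
        uintptr_t key;                     /* the high address bits */
        struct bi_s *hash_link;            /* chain, as with HASH_TL */
    } bottom_index_;

    static bottom_index_ *top_index_[TOP_SZ_];

    hdr *lookup_header(void *p)
    {
        uintptr_t addr = (uintptr_t)p;
        uintptr_t hi = addr >> (LOG_BOTTOM_SZ_ + LOG_HBLKSIZE_);
        bottom_index_ *bi;

        for (bi = top_index_[hi & (TOP_SZ_ - 1)]; bi != 0;
             bi = bi->hash_link)           /* usually a short chain */
            if (bi->key == hi)
                return bi->index[(addr >> LOG_HBLKSIZE_)
                                 & (BOTTOM_SZ_ - 1)];
        return 0;                          /* address not in the GC heap */
    }
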
Added: llvm-gcc-4.2/trunk/boehm-gc/hpux_test_and_clear.s
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/hpux_test_and_clear.s?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/hpux_test_and_clear.s (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/hpux_test_and_clear.s Thu Nov 8 16:56:19 2007
@@ -0,0 +1,21 @@
+ .SPACE $PRIVATE$
+ .SUBSPA $DATA$,QUAD=1,ALIGN=8,ACCESS=31
+ .SUBSPA $BSS$,QUAD=1,ALIGN=8,ACCESS=31,ZERO,SORT=82
+ .SPACE $TEXT$
+ .SUBSPA $LIT$,QUAD=0,ALIGN=8,ACCESS=44
+ .SUBSPA $CODE$,QUAD=0,ALIGN=8,ACCESS=44,CODE_ONLY
+ .IMPORT $global$,DATA
+ .IMPORT $$dyncall,MILLICODE
+ .SPACE $TEXT$
+ .SUBSPA $CODE$
+
+ .align 4
+ .EXPORT GC_test_and_clear,ENTRY,PRIV_LEV=3,ARGW0=GR,RTNVAL=GR
+GC_test_and_clear
+ .PROC
+ .CALLINFO FRAME=0,NO_CALLS
+ .ENTRY
+ ldcw,co (%r26),%r28
+ bv,n 0(%r2)
+ .EXIT
+ .PROCEND
Added: llvm-gcc-4.2/trunk/boehm-gc/ia64_save_regs_in_stack.s
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/ia64_save_regs_in_stack.s?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/ia64_save_regs_in_stack.s (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/ia64_save_regs_in_stack.s Thu Nov 8 16:56:19 2007
@@ -0,0 +1,12 @@
+ .text
+ .align 16
+ .global GC_save_regs_in_stack
+ .proc GC_save_regs_in_stack
+GC_save_regs_in_stack:
+ .body
+ flushrs
+ ;;
+ mov r8=ar.bsp
+ br.ret.sptk.few rp
+ .endp GC_save_regs_in_stack
+
Added: llvm-gcc-4.2/trunk/boehm-gc/if_mach.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/if_mach.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/if_mach.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/if_mach.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,28 @@
+/* Conditionally execute a command based on machine and OS from gcconfig.h */
+
+# include "private/gcconfig.h"
+# include <stdio.h>
+# include <string.h>
+# include <unistd.h>
+
+int main(argc, argv, envp)
+int argc;
+char ** argv;
+char ** envp;
+{
+ if (argc < 4) goto Usage;
+ if (strcmp(MACH_TYPE, argv[1]) != 0) return(0);
+ if (strcmp(OS_TYPE, "") != 0 && strcmp(argv[2], "") != 0
+ && strcmp(OS_TYPE, argv[2]) != 0) return(0);
+ fprintf(stderr, "^^^^Starting command^^^^\n");
+ fflush(stdout);
+ execvp(argv[3], argv+3);
+ perror("Couldn't execute");
+
+Usage:
+ fprintf(stderr, "Usage: %s mach_type os_type command\n", argv[0]);
+ fprintf(stderr, "Currently mach_type = %s, os_type = %s\n",
+ MACH_TYPE, OS_TYPE);
+ return(1);
+}
+
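Given the usage message above, an invocation looks like
`./if_mach SPARC SUNOS5 cc -c mach_dep.c` (the first two arguments are
compared against MACH_TYPE and OS_TYPE from gcconfig.h; the particular
command shown is illustrative). The command runs only when both match,
and an empty os_type argument matches any OS.
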
Added: llvm-gcc-4.2/trunk/boehm-gc/if_not_there.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/if_not_there.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/if_not_there.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/if_not_there.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,41 @@
+/* Conditionally execute a command if the file argv[1] doesn't exist */
+/* Except for execvp, we stick to ANSI C. */
+# include "private/gcconfig.h"
+# include <stdio.h>
+# include <stdlib.h>
+# include <unistd.h>
+#ifdef __DJGPP__
+#include <dirent.h>
+#endif /* __DJGPP__ */
+
+int main(argc, argv, envp)
+int argc;
+char ** argv;
+char ** envp;
+{
+ FILE * f;
+#ifdef __DJGPP__
+ DIR * d;
+#endif /* __DJGPP__ */
+ if (argc < 3) goto Usage;
+ if ((f = fopen(argv[1], "rb")) != 0
+ || (f = fopen(argv[1], "r")) != 0) {
+ fclose(f);
+ return(0);
+ }
+#ifdef __DJGPP__
+ if ((d = opendir(argv[1])) != 0) {
+ closedir(d);
+ return(0);
+ }
+#endif
+ printf("^^^^Starting command^^^^\n");
+ fflush(stdout);
+ execvp(argv[2], argv+2);
+ exit(1);
+
+Usage:
+ fprintf(stderr, "Usage: %s file_name command\n", argv[0]);
+ return(1);
+}
+
Added: llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.am
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.am?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.am (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.am Thu Nov 8 16:56:19 2007
@@ -0,0 +1,7 @@
+AUTOMAKE_OPTIONS = foreign
+
+noinst_HEADERS = gc.h gc_backptr.h gc_local_alloc.h \
+ gc_pthread_redirects.h gc_cpp.h
+
+
+
Added: llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.in
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.in?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.in (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/Makefile.in Thu Nov 8 16:56:19 2007
@@ -0,0 +1,421 @@
+# Makefile.in generated by automake 1.9.6 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+# 2003, 2004, 2005 Free Software Foundation, Inc.
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+
+srcdir = @srcdir@
+top_srcdir = @top_srcdir@
+VPATH = @srcdir@
+pkgdatadir = $(datadir)/@PACKAGE@
+pkglibdir = $(libdir)/@PACKAGE@
+pkgincludedir = $(includedir)/@PACKAGE@
+top_builddir = ..
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+INSTALL = @INSTALL@
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+build_triplet = @build@
+host_triplet = @host@
+target_triplet = @target@
+subdir = include
+DIST_COMMON = $(noinst_HEADERS) $(srcdir)/Makefile.am \
+ $(srcdir)/Makefile.in $(srcdir)/gc_config.h.in \
+ $(srcdir)/gc_ext_config.h.in
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/../config/acx.m4 \
+ $(top_srcdir)/../config/depstand.m4 \
+ $(top_srcdir)/../config/lead-dot.m4 \
+ $(top_srcdir)/../config/multi.m4 \
+ $(top_srcdir)/../config/no-executables.m4 \
+ $(top_srcdir)/../libtool.m4 $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+CONFIG_HEADER = gc_config.h gc_ext_config.h
+CONFIG_CLEAN_FILES =
+SOURCES =
+DIST_SOURCES =
+HEADERS = $(noinst_HEADERS)
+ETAGS = etags
+CTAGS = ctags
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+ACLOCAL = @ACLOCAL@
+AMDEP_FALSE = @AMDEP_FALSE@
+AMDEP_TRUE = @AMDEP_TRUE@
+AMTAR = @AMTAR@
+AM_CPPFLAGS = @AM_CPPFLAGS@
+AR = @AR@
+AS = @AS@
+AUTOCONF = @AUTOCONF@
+AUTOHEADER = @AUTOHEADER@
+AUTOMAKE = @AUTOMAKE@
+AWK = @AWK@
+CC = @CC@
+CCAS = @CCAS@
+CCASFLAGS = @CCASFLAGS@
+CCDEPMODE = @CCDEPMODE@
+CFLAGS = @CFLAGS@
+CPLUSPLUS_FALSE = @CPLUSPLUS_FALSE@
+CPLUSPLUS_TRUE = @CPLUSPLUS_TRUE@
+CPP = @CPP@
+CPPFLAGS = @CPPFLAGS@
+CXX = @CXX@
+CXXCPP = @CXXCPP@
+CXXDEPMODE = @CXXDEPMODE@
+CXXFLAGS = @CXXFLAGS@
+CYGPATH_W = @CYGPATH_W@
+DEFS = @DEFS@
+DEPDIR = @DEPDIR@
+ECHO_C = @ECHO_C@
+ECHO_N = @ECHO_N@
+ECHO_T = @ECHO_T@
+EGREP = @EGREP@
+EXEEXT = @EXEEXT@
+EXTRA_TEST_LIBS = @EXTRA_TEST_LIBS@
+GC_CFLAGS = @GC_CFLAGS@
+INSTALL_DATA = @INSTALL_DATA@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+INSTALL_SCRIPT = @INSTALL_SCRIPT@
+INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
+LIBOBJS = @LIBOBJS@
+LIBS = @LIBS@
+LIBTOOL = @LIBTOOL@
+LN_S = @LN_S@
+LTLIBOBJS = @LTLIBOBJS@
+MAINT = @MAINT@
+MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@
+MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@
+MAKEINFO = @MAKEINFO@
+MY_CFLAGS = @MY_CFLAGS@
+OBJEXT = @OBJEXT@
+PACKAGE = @PACKAGE@
+PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
+PACKAGE_NAME = @PACKAGE_NAME@
+PACKAGE_STRING = @PACKAGE_STRING@
+PACKAGE_TARNAME = @PACKAGE_TARNAME@
+PACKAGE_VERSION = @PACKAGE_VERSION@
+PATH_SEPARATOR = @PATH_SEPARATOR@
+POWERPC_DARWIN_FALSE = @POWERPC_DARWIN_FALSE@
+POWERPC_DARWIN_TRUE = @POWERPC_DARWIN_TRUE@
+RANLIB = @RANLIB@
+SET_MAKE = @SET_MAKE@
+SHELL = @SHELL@
+STRIP = @STRIP@
+THREADLIBS = @THREADLIBS@
+VERSION = @VERSION@
+ac_ct_AR = @ac_ct_AR@
+ac_ct_AS = @ac_ct_AS@
+ac_ct_CC = @ac_ct_CC@
+ac_ct_CXX = @ac_ct_CXX@
+ac_ct_RANLIB = @ac_ct_RANLIB@
+ac_ct_STRIP = @ac_ct_STRIP@
+addincludes = @addincludes@
+addlibs = @addlibs@
+addobjs = @addobjs@
+addtests = @addtests@
+am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
+am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
+am__fastdepCXX_FALSE = @am__fastdepCXX_FALSE@
+am__fastdepCXX_TRUE = @am__fastdepCXX_TRUE@
+am__include = @am__include@
+am__leading_dot = @am__leading_dot@
+am__quote = @am__quote@
+am__tar = @am__tar@
+am__untar = @am__untar@
+bindir = @bindir@
+build = @build@
+build_alias = @build_alias@
+build_cpu = @build_cpu@
+build_os = @build_os@
+build_vendor = @build_vendor@
+datadir = @datadir@
+exec_prefix = @exec_prefix@
+extra_ldflags_libgc = @extra_ldflags_libgc@
+host = @host@
+host_alias = @host_alias@
+host_cpu = @host_cpu@
+host_os = @host_os@
+host_vendor = @host_vendor@
+includedir = @includedir@
+infodir = @infodir@
+install_sh = @install_sh@
+libdir = @libdir@
+libexecdir = @libexecdir@
+localstatedir = @localstatedir@
+mandir = @mandir@
+mkdir_p = @mkdir_p@
+mkinstalldirs = @mkinstalldirs@
+multi_basedir = @multi_basedir@
+oldincludedir = @oldincludedir@
+prefix = @prefix@
+program_transform_name = @program_transform_name@
+sbindir = @sbindir@
+sharedstatedir = @sharedstatedir@
+sysconfdir = @sysconfdir@
+target = @target@
+target_alias = @target_alias@
+target_all = @target_all@
+target_cpu = @target_cpu@
+target_noncanonical = @target_noncanonical@
+target_os = @target_os@
+target_vendor = @target_vendor@
+toolexecdir = @toolexecdir@
+toolexeclibdir = @toolexeclibdir@
+AUTOMAKE_OPTIONS = foreign
+noinst_HEADERS = gc.h gc_backptr.h gc_local_alloc.h \
+ gc_pthread_redirects.h gc_cpp.h
+
+all: gc_config.h gc_ext_config.h
+ $(MAKE) $(AM_MAKEFLAGS) all-am
+
+.SUFFIXES:
+$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
+ && exit 0; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign include/Makefile'; \
+ cd $(top_srcdir) && \
+ $(AUTOMAKE) --foreign include/Makefile
+.PRECIOUS: Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+
+$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+
+gc_config.h: stamp-h1
+ @if test ! -f $@; then \
+ rm -f stamp-h1; \
+ $(MAKE) stamp-h1; \
+ else :; fi
+
+stamp-h1: $(srcdir)/gc_config.h.in $(top_builddir)/config.status
+ @rm -f stamp-h1
+ cd $(top_builddir) && $(SHELL) ./config.status include/gc_config.h
+$(srcdir)/gc_config.h.in: @MAINTAINER_MODE_TRUE@ $(am__configure_deps)
+ cd $(top_srcdir) && $(AUTOHEADER)
+ rm -f stamp-h1
+ touch $@
+
+gc_ext_config.h: stamp-h2
+ @if test ! -f $@; then \
+ rm -f stamp-h2; \
+ $(MAKE) stamp-h2; \
+ else :; fi
+
+stamp-h2: $(srcdir)/gc_ext_config.h.in $(top_builddir)/config.status
+ @rm -f stamp-h2
+ cd $(top_builddir) && $(SHELL) ./config.status include/gc_ext_config.h
+
+distclean-hdr:
+ -rm -f gc_config.h stamp-h1 gc_ext_config.h stamp-h2
+
+mostlyclean-libtool:
+ -rm -f *.lo
+
+clean-libtool:
+ -rm -rf .libs _libs
+
+distclean-libtool:
+ -rm -f libtool
+uninstall-info-am:
+
+ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
+ list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
+ unique=`for i in $$list; do \
+ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
+ done | \
+ $(AWK) ' { files[$$0] = 1; } \
+ END { for (i in files) print i; }'`; \
+ mkid -fID $$unique
+tags: TAGS
+
+TAGS: $(HEADERS) $(SOURCES) gc_config.h.in gc_ext_config.h.in $(TAGS_DEPENDENCIES) \
+ $(TAGS_FILES) $(LISP)
+ tags=; \
+ here=`pwd`; \
+ list='$(SOURCES) $(HEADERS) gc_config.h.in gc_ext_config.h.in $(LISP) $(TAGS_FILES)'; \
+ unique=`for i in $$list; do \
+ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
+ done | \
+ $(AWK) ' { files[$$0] = 1; } \
+ END { for (i in files) print i; }'`; \
+ if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
+ test -n "$$unique" || unique=$$empty_fix; \
+ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
+ $$tags $$unique; \
+ fi
+ctags: CTAGS
+CTAGS: $(HEADERS) $(SOURCES) gc_config.h.in gc_ext_config.h.in $(TAGS_DEPENDENCIES) \
+ $(TAGS_FILES) $(LISP)
+ tags=; \
+ here=`pwd`; \
+ list='$(SOURCES) $(HEADERS) gc_config.h.in gc_ext_config.h.in $(LISP) $(TAGS_FILES)'; \
+ unique=`for i in $$list; do \
+ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
+ done | \
+ $(AWK) ' { files[$$0] = 1; } \
+ END { for (i in files) print i; }'`; \
+ test -z "$(CTAGS_ARGS)$$tags$$unique" \
+ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
+ $$tags $$unique
+
+GTAGS:
+ here=`$(am__cd) $(top_builddir) && pwd` \
+ && cd $(top_srcdir) \
+ && gtags -i $(GTAGS_ARGS) $$here
+
+distclean-tags:
+ -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
+
+distdir: $(DISTFILES)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
+ list='$(DISTFILES)'; for file in $$list; do \
+ case $$file in \
+ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
+ $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
+ esac; \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test "$$dir" != "$$file" && test "$$dir" != "."; then \
+ dir="/$$dir"; \
+ $(mkdir_p) "$(distdir)$$dir"; \
+ else \
+ dir=''; \
+ fi; \
+ if test -d $$d/$$file; then \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
+ fi; \
+ cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
+ else \
+ test -f $(distdir)/$$file \
+ || cp -p $$d/$$file $(distdir)/$$file \
+ || exit 1; \
+ fi; \
+ done
+check-am: all-am
+check: check-am
+all-am: Makefile $(HEADERS) gc_config.h gc_ext_config.h
+installdirs:
+install: install-am
+install-exec: install-exec-am
+install-data: install-data-am
+uninstall: uninstall-am
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-am
+install-strip:
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ `test -z '$(STRIP)' || \
+ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
+mostlyclean-generic:
+
+clean-generic:
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+
+maintainer-clean-generic:
+ @echo "This command is intended for maintainers to use"
+ @echo "it deletes files that may require special tools to rebuild."
+clean: clean-am
+
+clean-am: clean-generic clean-libtool mostlyclean-am
+
+distclean: distclean-am
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic distclean-hdr \
+ distclean-libtool distclean-tags
+
+dvi: dvi-am
+
+dvi-am:
+
+html: html-am
+
+info: info-am
+
+info-am:
+
+install-data-am:
+
+install-exec-am:
+
+install-info: install-info-am
+
+install-man:
+
+installcheck-am:
+
+maintainer-clean: maintainer-clean-am
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-generic
+
+mostlyclean: mostlyclean-am
+
+mostlyclean-am: mostlyclean-generic mostlyclean-libtool
+
+pdf: pdf-am
+
+pdf-am:
+
+ps: ps-am
+
+ps-am:
+
+uninstall-am: uninstall-info-am
+
+.PHONY: CTAGS GTAGS all all-am check check-am clean clean-generic \
+ clean-libtool ctags distclean distclean-generic distclean-hdr \
+ distclean-libtool distclean-tags distdir dvi dvi-am html \
+ html-am info info-am install install-am install-data \
+ install-data-am install-exec install-exec-am install-info \
+ install-info-am install-man install-strip installcheck \
+ installcheck-am installdirs maintainer-clean \
+ maintainer-clean-generic mostlyclean mostlyclean-generic \
+ mostlyclean-libtool pdf pdf-am ps ps-am tags uninstall \
+ uninstall-am uninstall-info-am
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
Added: llvm-gcc-4.2/trunk/boehm-gc/include/cord.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/cord.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/cord.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/cord.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,327 @@
+/*
+ * Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ * Author: Hans-J. Boehm (boehm at parc.xerox.com)
+ */
+/* Boehm, October 5, 1995 4:20 pm PDT */
+
+/*
+ * Cords are immutable character strings. A number of operations
+ * on long cords are much more efficient than their strings.h counterparts.
+ * In particular, concatenation takes constant time independent of the length
+ * of the arguments. (Cords are represented as trees, with internal
+ * nodes representing concatenation and leaves consisting of either C
+ * strings or a functional description of the string.)
+ *
+ * The following are reasonable applications of cords. They would perform
+ * unacceptably if C strings were used:
+ * - A compiler that produces assembly language output by repeatedly
+ * concatenating instructions onto a cord representing the output file.
+ * - A text editor that converts the input file to a cord, and then
+ * performs editing operations by producing a new cord representing
+ * the file after each character change (and keeping the old ones in an
+ * edit history)
+ *
+ * For optimal performance, cords should be built by
+ * concatenating short sections.
+ * This interface is designed for maximum compatibility with C strings.
+ * ASCII NUL characters may be embedded in cords using CORD_from_fn.
+ * This is handled correctly, but CORD_to_char_star will produce a string
+ * with embedded NULs when given such a cord.
+ *
+ * This interface is fairly big, largely for performance reasons.
+ * The most basic constants and functions:
+ *
+ * CORD - the type of a cord;
+ * CORD_EMPTY - empty cord;
+ * CORD_len(cord) - length of a cord;
+ * CORD_cat(cord1,cord2) - concatenation of two cords;
+ * CORD_substr(cord, start, len) - substring (or subcord);
+ * CORD_pos i; CORD_FOR(i, cord) { ... CORD_pos_fetch(i) ... } -
+ * examine each character in a cord. CORD_pos_fetch(i) is the char.
+ * CORD_fetch(int i) - Retrieve i'th character (slowly).
+ * CORD_cmp(cord1, cord2) - compare two cords.
+ * CORD_from_file(FILE * f) - turn a read-only file into a cord.
+ * CORD_to_char_star(cord) - convert to C string.
+ * (Non-NULL C constant strings are cords.)
+ * CORD_printf (etc.) - cord version of printf. Use %r for cords.
+ */
+# ifndef CORD_H
+
+# define CORD_H
+# include <stddef.h>
+# include <stdio.h>
+/* Cords have type const char *. This is cheating quite a bit, and not */
+/* 100% portable. But it means that nonempty character string */
+/* constants may be used as cords directly, provided the string is */
+/* never modified in place. The empty cord is represented by, and */
+/* can be written as, 0. */
+
+typedef const char * CORD;
+
+/* An empty cord is always represented as nil */
+# define CORD_EMPTY 0
+
+/* Is a nonempty cord represented as a C string? */
+#define CORD_IS_STRING(s) (*(s) != '\0')
+
+/* Concatenate two cords. If the arguments are C strings, they may */
+/* not be subsequently altered. */
+CORD CORD_cat(CORD x, CORD y);
+
+/* Concatenate a cord and a C string with known length. Except for the */
+/* empty string case, this is a special case of CORD_cat. Since the */
+/* length is known, it can be faster. */
+/* The string y is shared with the resulting CORD. Hence it should */
+/* not be altered by the caller. */
+CORD CORD_cat_char_star(CORD x, const char * y, size_t leny);
+
+/* Compute the length of a cord */
+size_t CORD_len(CORD x);
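+
+/* Example: a cord can be built by repeated concatenation. A sketch */
+/* only (recall that nonempty C string constants are already cords): */
+/*
+ * CORD s = CORD_EMPTY;
+ * int i;
+ *
+ * for (i = 0; i < 1000; i++) {
+ *     s = CORD_cat(s, "0123456789");
+ * }
+ * assert(CORD_len(s) == 10000);
+ */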
+
+/* Cords may be represented by functions defining the ith character */
+typedef char (* CORD_fn)(size_t i, void * client_data);
+
+/* Turn a functional description into a cord. */
+CORD CORD_from_fn(CORD_fn fn, void * client_data, size_t len);
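+
+/* Example: a functional cord. The function below is illustrative */
+/* only; it yields a million-character cord in constant space: */
+/*
+ * char hex_digit(size_t i, void * client_data)
+ * {
+ *     return "0123456789abcdef"[i % 16];
+ * }
+ * ...
+ * CORD h = CORD_from_fn(hex_digit, 0, 1000000);
+ */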
+
+/* Return the substring (subcord really) of x with length at most n, */
+/* starting at position i. (The initial character has position 0.) */
+CORD CORD_substr(CORD x, size_t i, size_t n);
+
+/* Return the argument, but rebalanced to allow more efficient */
+/* character retrieval, substring operations, and comparisons. */
+/* This is useful only for cords that were built using repeated */
+/* concatenation. Guarantees log time access to the result, unless */
+/* x was obtained through a large number of repeated substring ops */
+/* or the embedded functional descriptions take longer to evaluate. */
+/* May reallocate significant parts of the cord. The argument is not */
+/* modified; only the result is balanced. */
+CORD CORD_balance(CORD x);
+
+/* The following traverse a cord by applying a function to each */
+/* character. This is occasionally appropriate, especially where */
+/* speed is crucial. But, since C doesn't have nested functions, */
+/* clients of this sort of traversal are clumsy to write. Consider */
+/* the functions that operate on cord positions instead. */
+
+/* Function to iteratively apply to individual characters in cord. */
+typedef int (* CORD_iter_fn)(char c, void * client_data);
+
+/* Function to apply to substrings of a cord. Each substring is a */
+/* C character string, not a general cord. */
+typedef int (* CORD_batched_iter_fn)(const char * s, void * client_data);
+# define CORD_NO_FN ((CORD_batched_iter_fn)0)
+
+/* Apply f1 to each character in the cord, in ascending order, */
+/* starting at position i. If */
+/* f2 is not CORD_NO_FN, then multiple calls to f1 may be replaced by */
+/* a single call to f2. The parameter f2 is provided only to allow */
+/* some optimization by the client. This terminates when the right */
+/* end of this string is reached, or when f1 or f2 return != 0. In the */
+/* latter case CORD_iter returns != 0. Otherwise it returns 0. */
+/* The specified value of i must be < CORD_len(x). */
+int CORD_iter5(CORD x, size_t i, CORD_iter_fn f1,
+ CORD_batched_iter_fn f2, void * client_data);
+
+/* A simpler version that starts at 0, and without f2: */
+int CORD_iter(CORD x, CORD_iter_fn f1, void * client_data);
+# define CORD_iter(x, f1, cd) CORD_iter5(x, 0, f1, CORD_NO_FN, cd)
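+
+/* Example: counting occurrences of a character with CORD_iter. */
+/* A sketch; the iterator returns 0 so that the traversal runs */
+/* to the end of the cord: */
+/*
+ * struct count_state { char target; size_t count; };
+ *
+ * int count_char(char c, void * client_data)
+ * {
+ *     struct count_state * s = (struct count_state *)client_data;
+ *     if (c == s->target) s->count++;
+ *     return 0;
+ * }
+ * ...
+ * struct count_state st = { 'x', 0 };
+ * (void) CORD_iter(x, count_char, &st);
+ */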
+
+/* Similar to CORD_iter5, but end-to-beginning. No provisions for */
+/* CORD_batched_iter_fn. */
+int CORD_riter4(CORD x, size_t i, CORD_iter_fn f1, void * client_data);
+
+/* A simpler version that starts at the end: */
+int CORD_riter(CORD x, CORD_iter_fn f1, void * client_data);
+
+/* Functions that operate on cord positions. The easy way to traverse */
+/* cords. A cord position is logically a pair consisting of a cord */
+/* and an index into that cord. But it is much faster to retrieve a */
+/* character based on a position than on an index. Unfortunately, */
+/* positions are big (on the order of a few hundred bytes), so */
+/* allocate them with caution. */
+/* Things in cord_pos.h should be treated as opaque, except as */
+/* described below. Also note that */
+/* CORD_pos_fetch, CORD_next and CORD_prev have both macro and function */
+/* definitions. The former may evaluate their argument more than once. */
+# include "private/cord_pos.h"
+
+/*
+ Visible definitions from above:
+
+ typedef <OPAQUE but fairly big> CORD_pos[1];
+
+ * Extract the cord from a position:
+ CORD CORD_pos_to_cord(CORD_pos p);
+
+ * Extract the current index from a position:
+ size_t CORD_pos_to_index(CORD_pos p);
+
+ * Fetch the character located at the given position:
+ char CORD_pos_fetch(CORD_pos p);
+
+ * Initialize the position to refer to the given cord and index.
+ * Note that this is the most expensive function on positions:
+ void CORD_set_pos(CORD_pos p, CORD x, size_t i);
+
+ * Advance the position to the next character.
+ * P must be initialized and valid.
+ * Invalidates p if past end:
+ void CORD_next(CORD_pos p);
+
+ * Move the position to the preceding character.
+ * P must be initialized and valid.
+ * Invalidates p if past beginning:
+ void CORD_prev(CORD_pos p);
+
+ * Is the position valid, i.e. inside the cord?
+ int CORD_pos_valid(CORD_pos p);
+*/
+# define CORD_FOR(pos, cord) \
+ for (CORD_set_pos(pos, cord, 0); CORD_pos_valid(pos); CORD_next(pos))
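+
+/* Example: the position-based analog of a character traversal */
+/* might look like the following sketch: */
+/*
+ * CORD_pos p;
+ * size_t n = 0;
+ *
+ * CORD_FOR(p, x) {
+ *     if (CORD_pos_fetch(p) == 'x') n++;
+ * }
+ */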
+
+
+/* An out of memory handler to call. May be supplied by client. */
+/* Must not return. */
+extern void (* CORD_oom_fn)(void);
+
+/* Dump the representation of x to stdout in an implementation defined */
+/* manner. Intended for debugging only. */
+void CORD_dump(CORD x);
+
+/* The following could easily be implemented by the client. They are */
+/* provided in cordxtra.c for convenience. */
+
+/* Concatenate a character to the end of a cord. */
+CORD CORD_cat_char(CORD x, char c);
+
+/* Concatenate n cords. */
+CORD CORD_catn(int n, /* CORD */ ...);
+
+/* Return the character in CORD_substr(x, i, 1) */
+char CORD_fetch(CORD x, size_t i);
+
+/* Return < 0, 0, or > 0, depending on whether x < y, x = y, x > y */
+int CORD_cmp(CORD x, CORD y);
+
+/* A generalization that takes both starting positions for the */
+/* comparison, and a limit on the number of characters to be compared. */
+int CORD_ncmp(CORD x, size_t x_start, CORD y, size_t y_start, size_t len);
+
+/* Find the first occurrence of s in x at position start or later. */
+/* Return the position of the first character of s in x, or */
+/* CORD_NOT_FOUND if there is none. */
+size_t CORD_str(CORD x, size_t start, CORD s);
+
+/* Return a cord consisting of i copies of (possibly NUL) c. Dangerous */
+/* in conjunction with CORD_to_char_star. */
+/* The resulting representation takes constant space, independent of i. */
+CORD CORD_chars(char c, size_t i);
+# define CORD_nul(i) CORD_chars('\0', (i))
+
+/* Turn a file into cord. The file must be seekable. Its contents */
+/* must remain constant. The file may be accessed as an immediate */
+/* result of this call and/or as a result of subsequent accesses to */
+/* the cord. Short files are likely to be immediately read, but */
+/* long files are likely to be read on demand, possibly relying on */
+/* stdio for buffering. */
+/* We must have exclusive access to the descriptor f, i.e. we may */
+/* read it at any time, and expect the file pointer to be */
+/* where we left it. Normally this should be invoked as */
+/* CORD_from_file(fopen(...)) */
+/* CORD_from_file arranges to close the file descriptor when it is no */
+/* longer needed (e.g. when the result becomes inaccessible). */
+/* The file f must be such that ftell reflects the actual character */
+/* position in the file, i.e. the number of characters that can be */
+/* or were read with fread. On UNIX systems this is always true. On */
+/* MS Windows systems, f must be opened in binary mode. */
+CORD CORD_from_file(FILE * f);
+
+/* Equivalent to the above, except that the entire file will be read */
+/* and the file will be closed immediately. */
+/* The binary mode restriction from above does not apply. */
+CORD CORD_from_file_eager(FILE * f);
+
+/* Equivalent to the above, except that the file will be read on demand.*/
+/* The binary mode restriction applies. */
+CORD CORD_from_file_lazy(FILE * f);
+
+/* Turn a cord into a C string. The result shares no structure with */
+/* x, and is thus modifiable. */
+char * CORD_to_char_star(CORD x);
+
+/* Turn a C string into a CORD. The C string is copied, and so may */
+/* subsequently be modified. */
+CORD CORD_from_char_star(const char *s);
+
+/* Identical to the above, but the result may share structure with */
+/* the argument and is thus not modifiable. */
+const char * CORD_to_const_char_star(CORD x);
+
+/* Write a cord to a file, starting at the current position. No */
+/* trailing NULs or newlines are added. */
+/* Returns EOF if a write error occurs, 1 otherwise. */
+int CORD_put(CORD x, FILE * f);
+
+/* "Not found" result for the following two functions. */
+# define CORD_NOT_FOUND ((size_t)(-1))
+
+/* A vague analog of strchr. Returns the position (an integer, not */
+/* a pointer) of the first occurrence of (char) c inside x at position */
+/* i or later. The value i must be < CORD_len(x). */
+size_t CORD_chr(CORD x, size_t i, int c);
+
+/* A vague analog of strrchr. Returns index of the last occurrence */
+/* of (char) c inside x at position i or earlier. The value i */
+/* must be < CORD_len(x). */
+size_t CORD_rchr(CORD x, size_t i, int c);
+
+
+/* The following are also not primitive, but are implemented in */
+/* cordprnt.c. They provide functionality similar to the ANSI C */
+/* functions with corresponding names, but with the following */
+/* additions and changes: */
+/* 1. A %r conversion specification specifies a CORD argument. Field */
+/* width, precision, etc. have the same semantics as for %s. */
+/* (Note that %c,%C, and %S were already taken.) */
+/* 2. The format string is represented as a CORD. */
+/* 3. CORD_sprintf and CORD_vsprintf assign the result through the 1st */
+/* argument. Unlike their ANSI C versions, there is no need to guess */
+/* the correct buffer size. */
+/* 4. Most of the conversions are implemented through the native */
+/* vsprintf. Hence they are usually no faster, and */
+/* idiosyncrasies of the native printf are preserved. However, */
+/* CORD arguments to CORD_sprintf and CORD_vsprintf are NOT copied; */
+/* the result shares the original structure. This may make them */
+/* very efficient in some unusual applications. */
+/* The format string is copied. */
+/* All functions return the number of characters generated or -1 on */
+/* error. This complies with the ANSI standard, but is inconsistent */
+/* with some older implementations of sprintf. */
+
+/* The implementation of these is probably less portable than the rest */
+/* of this package. */
+
+#ifndef CORD_NO_IO
+
+#include <stdarg.h>
+
+int CORD_sprintf(CORD * out, CORD format, ...);
+int CORD_vsprintf(CORD * out, CORD format, va_list args);
+int CORD_fprintf(FILE * f, CORD format, ...);
+int CORD_vfprintf(FILE * f, CORD format, va_list args);
+int CORD_printf(CORD format, ...);
+int CORD_vprintf(CORD format, va_list args);
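+
+/* Example: the %r conversion frees the caller from guessing a */
+/* buffer size. A sketch: */
+/*
+ * CORD result;
+ * CORD name = "world";
+ *
+ * (void) CORD_sprintf(&result, "Hello, %r!\n", name);
+ * (void) CORD_put(result, stdout);
+ */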
+
+#endif /* CORD_NO_IO */
+
+# endif /* CORD_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/ec.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/ec.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/ec.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/ec.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,70 @@
+# ifndef EC_H
+# define EC_H
+
+# ifndef CORD_H
+# include "cord.h"
+# endif
+
+/* Extensible cords are strings that may be destructively appended to. */
+/* They allow fast construction of cords from characters that are */
+/* being read from a stream. */
+/*
+ * A client might look like:
+ *
+ * {
+ * CORD_ec x;
+ * CORD result;
+ * char c;
+ * FILE *f;
+ *
+ * ...
+ * CORD_ec_init(x);
+ * while(...) {
+ * c = getc(f);
+ * ...
+ * CORD_ec_append(x, c);
+ * }
+ * result = CORD_balance(CORD_ec_to_cord(x));
+ * }
+ *
+ * If a C string is desired as the final result, the call to CORD_balance
+ * may be replaced by a call to CORD_to_char_star.
+ */
+
+# ifndef CORD_BUFSZ
+# define CORD_BUFSZ 128
+# endif
+
+typedef struct CORD_ec_struct {
+ CORD ec_cord;
+ char * ec_bufptr;
+ char ec_buf[CORD_BUFSZ+1];
+} CORD_ec[1];
+
+/* This structure represents the concatenation of ec_cord with */
+/* ec_buf[0 ... (ec_bufptr-ec_buf-1)] */
+
+/* Flush the buffer part of the extensible cord into ec_cord. */
+/* Note that this is almost the only real function, and it is */
+/* implemented in 6 lines in cordxtra.c */
+void CORD_ec_flush_buf(CORD_ec x);
+
+/* Convert an extensible cord to a cord. */
+# define CORD_ec_to_cord(x) (CORD_ec_flush_buf(x), (x)[0].ec_cord)
+
+/* Initialize an extensible cord. */
+# define CORD_ec_init(x) ((x)[0].ec_cord = 0, (x)[0].ec_bufptr = (x)[0].ec_buf)
+
+/* Append a character to an extensible cord. */
+# define CORD_ec_append(x, c) \
+ { \
+ if ((x)[0].ec_bufptr == (x)[0].ec_buf + CORD_BUFSZ) { \
+ CORD_ec_flush_buf(x); \
+ } \
+ *((x)[0].ec_bufptr)++ = (c); \
+ }
+
+/* Append a cord to an extensible cord. Structure remains shared with */
+/* original. */
+void CORD_ec_append_cord(CORD_ec x, CORD s);
+
+# endif /* EC_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,1071 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+ * Copyright 1996-1999 by Silicon Graphics. All rights reserved.
+ * Copyright 1999 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/*
+ * Note that this defines a large number of tuning hooks, which can
+ * safely be ignored in nearly all cases. For normal use it suffices
+ * to call only GC_MALLOC and perhaps GC_REALLOC.
+ * For better performance, also look at GC_MALLOC_ATOMIC, and
+ * GC_enable_incremental. If you need an action to be performed
+ * immediately before an object is collected, look at GC_register_finalizer.
+ * If you are using Solaris threads, look at the end of this file.
+ * Everything else is best ignored unless you encounter performance
+ * problems.
+ */
+
+#ifndef _GC_H
+
+# define _GC_H
+
+/*
+ * As this header includes gc_config.h, preprocessor conflicts can occur with
+ * clients that include their own autoconf headers. The following #undef's
+ * work around some likely conflicts.
+ */
+
+# ifdef PACKAGE_NAME
+# undef PACKAGE_NAME
+# endif
+# ifdef PACKAGE_BUGREPORT
+# undef PACKAGE_BUGREPORT
+# endif
+# ifdef PACKAGE_STRING
+# undef PACKAGE_STRING
+# endif
+# ifdef PACKAGE_TARNAME
+# undef PACKAGE_TARNAME
+# endif
+# ifdef PACKAGE_VERSION
+# undef PACKAGE_VERSION
+# endif
+
+# include <gc_config.h>
+# include "gc_config_macros.h"
+
+# if defined(__STDC__) || defined(__cplusplus) || defined(_AIX)
+# define GC_PROTO(args) args
+ typedef void * GC_PTR;
+# define GC_CONST const
+# else
+# define GC_PROTO(args) ()
+ typedef char * GC_PTR;
+# define GC_CONST
+# endif
+
+# ifdef __cplusplus
+ extern "C" {
+# endif
+
+/* Define word and signed_word to be unsigned and signed types of the */
+/* size as char * or void *. There seems to be no way to do this */
+/* even semi-portably. The following is probably no better/worse */
+/* than almost anything else. */
+/* The ANSI standard suggests that size_t and ptrdiff_t might be */
+/* better choices. But those had incorrect definitions on some older */
+/* systems. Notably "typedef int size_t" is WRONG. */
+#ifndef _WIN64
+ typedef unsigned long GC_word;
+ typedef long GC_signed_word;
+#else
+ /* Win64 isn't really supported yet, but this is the first step. And */
+ /* it might cause error messages to show up in more plausible places. */
+ /* This needs basetsd.h, which is included by windows.h. */
+ typedef ULONG_PTR GC_word;
+ typedef LONG_PTR GC_signed_word;
+#endif
+
+/* Public read-only variables */
+
+GC_API GC_word GC_gc_no;/* Counter incremented per collection. */
+ /* Includes empty GCs at startup. */
+
+GC_API int GC_parallel; /* GC is parallelized for performance on */
+ /* multiprocessors. Currently set only */
+ /* implicitly if collector is built with */
+ /* -DPARALLEL_MARK and if either: */
+ /* Env variable GC_NPROC is set to > 1, or */
+ /* GC_NPROC is not set and this is an MP. */
+ /* If GC_parallel is set, incremental */
+ /* collection is only partially functional, */
+ /* and may not be desirable. */
+
+
+/* Public R/W variables */
+
+GC_API GC_PTR (*GC_oom_fn) GC_PROTO((size_t bytes_requested));
+ /* When there is insufficient memory to satisfy */
+ /* an allocation request, we return */
+ /* (*GC_oom_fn)(). By default this just */
+ /* returns 0. */
+ /* If it returns, it must return 0 or a valid */
+ /* pointer to a previously allocated heap */
+ /* object. */
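+
+/* Example: a client may install a handler that reports the failed */
+/* request and aborts, rather than returning 0. A sketch only: */
+/*
+ * GC_PTR my_oom_fn(size_t bytes_requested)
+ * {
+ *     fprintf(stderr, "GC: out of memory (%lu bytes)\n",
+ *             (unsigned long)bytes_requested);
+ *     abort();
+ *     return 0;
+ * }
+ * ...
+ * GC_oom_fn = my_oom_fn;
+ */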
+
+GC_API int GC_find_leak;
+ /* Do not actually garbage collect, but simply */
+ /* report inaccessible memory that was not */
+ /* deallocated with GC_free. Initial value */
+ /* is determined by FIND_LEAK macro. */
+
+GC_API int GC_all_interior_pointers;
+ /* Arrange for pointers to object interiors to */
+ /* be recognized as valid. May not be changed */
+ /* after GC initialization. */
+ /* Initial value is determined by */
+ /* -DALL_INTERIOR_POINTERS. */
+ /* Unless DONT_ADD_BYTE_AT_END is defined, this */
+ /* also affects whether sizes are increased by */
+ /* at least a byte to allow "off the end" */
+ /* pointer recognition. */
+ /* MUST BE 0 or 1. */
+
+GC_API int GC_quiet; /* Disable statistics output. Only matters if */
+ /* collector has been compiled with statistics */
+ /* enabled. This involves a performance cost, */
+ /* and is thus not the default. */
+
+GC_API int GC_finalize_on_demand;
+ /* If nonzero, finalizers will only be run in */
+ /* response to an explicit GC_invoke_finalizers */
+ /* call. The default is determined by whether */
+ /* the FINALIZE_ON_DEMAND macro is defined */
+ /* when the collector is built. */
+
+GC_API int GC_java_finalization;
+ /* Mark objects reachable from finalizable */
+ /* objects in a separate postpass. This makes */
+ /* it a bit safer to use non-topologically- */
+ /* ordered finalization. Default value is */
+ /* determined by JAVA_FINALIZATION macro. */
+
+GC_API void (* GC_finalizer_notifier) GC_PROTO((void));
+ /* Invoked by the collector when there are */
+ /* objects to be finalized. Invoked at most */
+ /* once per GC cycle. Never invoked unless */
+ /* GC_finalize_on_demand is set. */
+ /* Typically this will notify a finalization */
+ /* thread, which will call GC_invoke_finalizers */
+ /* in response. */
+
+GC_API int GC_dont_gc; /* != 0 ==> Don't collect. In versions 6.2a1+, */
+ /* this overrides explicit GC_gcollect() calls. */
+ /* Used as a counter, so that nested enabling */
+ /* and disabling work correctly. Should */
+ /* normally be updated with GC_enable() and */
+ /* GC_disable() calls. */
+ /* Direct assignment to GC_dont_gc is */
+ /* deprecated. */
+
+GC_API int GC_dont_expand;
+ /* Don't expand the heap unless explicitly requested */
+ /* or forced to. */
+
+GC_API int GC_use_entire_heap;
+ /* Causes the nonincremental collector to use the */
+ /* entire heap before collecting. This was the only */
+ /* option for GC versions < 5.0. This sometimes */
+ /* results in more large block fragmentation, since */
+ /* very large blocks will tend to get broken up */
+ /* during each GC cycle. It is likely to result in a */
+ /* larger working set, but lower collection */
+ /* frequencies, and hence fewer instructions executed */
+ /* in the collector. */
+
+GC_API int GC_full_freq; /* Number of partial collections between */
+ /* full collections. Matters only if */
+ /* GC_incremental is set. */
+ /* Full collections are also triggered if */
+ /* the collector detects a substantial */
+ /* increase in the number of in-use heap */
+ /* blocks. Values in the tens are now */
+ /* perfectly reasonable, unlike for */
+ /* earlier GC versions. */
+
+GC_API GC_word GC_non_gc_bytes;
+ /* Bytes not considered candidates for collection. */
+ /* Used only to control scheduling of collections. */
+ /* Updated by GC_malloc_uncollectable and GC_free. */
+ /* Wizards only. */
+
+GC_API int GC_no_dls;
+ /* Don't register dynamic library data segments. */
+ /* Wizards only. Should be used only if the */
+ /* application explicitly registers all roots. */
+ /* In Microsoft Windows environments, this will */
+ /* usually also prevent registration of the */
+ /* main data segment as part of the root set. */
+
+GC_API GC_word GC_free_space_divisor;
+ /* We try to make sure that we allocate at */
+ /* least N/GC_free_space_divisor bytes between */
+ /* collections, where N is the heap size plus */
+ /* a rough estimate of the root set size. */
+ /* Initially, GC_free_space_divisor = 3. */
+ /* Increasing its value will use less space */
+ /* but more collection time. Decreasing it */
+ /* will appreciably decrease collection time */
+ /* at the expense of space. */
+ /* GC_free_space_divisor = 1 will effectively */
+ /* disable collections. */
+
+GC_API GC_word GC_max_retries;
+ /* The maximum number of GCs attempted before */
+ /* reporting out of memory after heap */
+ /* expansion fails. Initially 0. */
+
+
+GC_API char *GC_stackbottom; /* Cool end of user stack. */
+ /* May be set in the client prior to */
+ /* calling any GC_ routines. This */
+ /* avoids some overhead, and */
+ /* potentially some signals that can */
+ /* confuse debuggers. Otherwise the */
+ /* collector attempts to set it */
+ /* automatically. */
+ /* For multithreaded code, this is the */
+ /* cold end of the stack for the */
+ /* primordial thread. */
+
+GC_API int GC_dont_precollect; /* Don't collect as part of */
+ /* initialization. Should be set only */
+ /* if the client wants a chance to */
+ /* manually initialize the root set */
+ /* before the first collection. */
+ /* Interferes with blacklisting. */
+ /* Wizards only. */
+
+/* Public procedures */
+
+/* Initialize the collector. This is only required when using thread-local
+ * allocation, since unlike the regular allocation routines, GC_local_malloc
+ * is not self-initializing. If you use GC_local_malloc you should arrange
+ * to call this somehow (e.g. from a constructor) before doing any allocation.
+ * For win32 threads, it needs to be called explicitly.
+ */
+GC_API void GC_init GC_PROTO((void));
+
+GC_API unsigned long GC_time_limit;
+ /* If incremental collection is enabled, */
+ /* We try to terminate collections */
+ /* after this many milliseconds. Not a */
+ /* hard time bound. Setting this to */
+ /* GC_TIME_UNLIMITED will essentially */
+ /* disable incremental collection while */
+ /* leaving generational collection */
+ /* enabled. */
+# define GC_TIME_UNLIMITED 999999
+ /* Setting GC_time_limit to this value */
+ /* will disable the "pause time exceeded"*/
+ /* tests. */
+
+/*
+ * general purpose allocation routines, with roughly malloc calling conv.
+ * The atomic versions promise that no relevant pointers are contained
+ * in the object. The nonatomic versions guarantee that the new object
+ * is cleared. GC_malloc_stubborn promises that no changes to the object
+ * will occur after GC_end_stubborn_change has been called on the
+ * result of GC_malloc_stubborn. GC_malloc_uncollectable allocates an object
+ * that is scanned for pointers to collectable objects, but is not itself
+ * collectable. The object is scanned even if it does not appear to
+ * be reachable. GC_malloc_uncollectable and GC_free called on the resulting
+ * object implicitly update GC_non_gc_bytes appropriately.
+ *
+ * Note that the GC_malloc_stubborn support is stubbed out by default
+ * starting in 6.0. GC_malloc_stubborn is an alias for GC_malloc unless
+ * the collector is built with STUBBORN_ALLOC defined.
+ */
+GC_API GC_PTR GC_malloc GC_PROTO((size_t size_in_bytes));
+GC_API GC_PTR GC_malloc_atomic GC_PROTO((size_t size_in_bytes));
+GC_API GC_PTR GC_malloc_uncollectable GC_PROTO((size_t size_in_bytes));
+GC_API GC_PTR GC_malloc_stubborn GC_PROTO((size_t size_in_bytes));
+
+/* The following is only defined if the library has been suitably */
+/* compiled: */
+GC_API GC_PTR GC_malloc_atomic_uncollectable GC_PROTO((size_t size_in_bytes));
+
+/* Explicitly deallocate an object. Dangerous if used incorrectly. */
+/* Requires a pointer to the base of an object. */
+/* If the argument is stubborn, it should not be changeable when freed. */
+/* An object should not be enabled for finalization when it is */
+/* explicitly deallocated. */
+/* GC_free(0) is a no-op, as required by ANSI C for free. */
+GC_API void GC_free GC_PROTO((GC_PTR object_addr));
+
+/*
+ * Stubborn objects may be changed only if the collector is explicitly informed.
+ * The collector is implicitly informed of coming change when such
+ * an object is first allocated. The following routines inform the
+ * collector that an object will no longer be changed, or that it will
+ * once again be changed. Only nonNIL pointer stores into the object
+ * are considered to be changes. The argument to GC_end_stubborn_change
+ * must be exactly the value returned by GC_malloc_stubborn or passed to
+ * GC_change_stubborn. (In the second case it may be an interior pointer
+ * within 512 bytes of the beginning of the objects.)
+ * There is a performance penalty for allowing more than
+ * one stubborn object to be changed at once, but it is acceptable to
+ * do so. The same applies to dropping stubborn objects that are still
+ * changeable.
+ */
+GC_API void GC_change_stubborn GC_PROTO((GC_PTR));
+GC_API void GC_end_stubborn_change GC_PROTO((GC_PTR));
+
+/* Return a pointer to the base (lowest address) of an object given */
+/* a pointer to a location within the object. */
+/* I.e. map an interior pointer to the corresponding base pointer. */
+/* Note that with debugging allocation, this returns a pointer to the */
+/* actual base of the object, i.e. the debug information, not to */
+/* the base of the user object. */
+/* Return 0 if displaced_pointer doesn't point to within a valid */
+/* object. */
+/* Note that a deallocated object in the garbage collected heap */
+/* may be considered valid, even if it has been deallocated with */
+/* GC_free. */
+GC_API GC_PTR GC_base GC_PROTO((GC_PTR displaced_pointer));
+
+/* Given a pointer to the base of an object, return its size in bytes. */
+/* The returned size may be slightly larger than what was originally */
+/* requested. */
+GC_API size_t GC_size GC_PROTO((GC_PTR object_addr));
+
+/* For compatibility with C library. This is occasionally faster than */
+/* a malloc followed by a bcopy. But if you rely on that, either here */
+/* or with the standard C library, your code is broken. In my */
+/* opinion, it shouldn't have been invented, but now we're stuck. -HB */
+/* The resulting object has the same kind as the original. */
+/* If the argument is stubborn, the result will have changes enabled. */
+/* It is an error to have changes enabled for the original object. */
+/* Follows ANSI conventions for NULL old_object. */
+GC_API GC_PTR GC_realloc
+ GC_PROTO((GC_PTR old_object, size_t new_size_in_bytes));
+
+/* Explicitly increase the heap size. */
+/* Returns 0 on failure, 1 on success. */
+GC_API int GC_expand_hp GC_PROTO((size_t number_of_bytes));
+
+/* Limit the heap size to n bytes. Useful when you're debugging, */
+/* especially on systems that don't handle running out of memory well. */
+/* n == 0 ==> unbounded. This is the default. */
+GC_API void GC_set_max_heap_size GC_PROTO((GC_word n));
+
+/* Inform the collector that a certain section of statically allocated */
+/* memory contains no pointers to garbage collected memory. Thus it */
+/* need not be scanned. This is sometimes important if the application */
+/* maps large read/write files into the address space, which could be */
+/* mistaken for dynamic library data segments on some systems. */
+GC_API void GC_exclude_static_roots GC_PROTO((GC_PTR start, GC_PTR finish));
+
+/* Clear the set of root segments. Wizards only. */
+GC_API void GC_clear_roots GC_PROTO((void));
+
+/* Add a root segment. Wizards only. */
+GC_API void GC_add_roots GC_PROTO((char * low_address,
+ char * high_address_plus_1));
+
+/* Remove a root segment. Wizards only. */
+GC_API void GC_remove_roots GC_PROTO((char * low_address,
+ char * high_address_plus_1));
+
+/* Add a displacement to the set of those considered valid by the */
+/* collector. GC_register_displacement(n) means that if p was returned */
+/* by GC_malloc, then (char *)p + n will be considered to be a valid */
+/* pointer to p. N must be small and less than the size of p. */
+/* (All pointers to the interior of objects from the stack are */
+/* considered valid in any case. This applies to heap objects and */
+/* static data.) */
+/* Preferably, this should be called before any other GC procedures. */
+/* Calling it later adds to the probability of excess memory */
+/* retention. */
+/* This is a no-op if the collector has recognition of */
+/* arbitrary interior pointers enabled, which is now the default. */
+GC_API void GC_register_displacement GC_PROTO((GC_word n));
+
+/* The following version should be used if any debugging allocation is */
+/* being done. */
+GC_API void GC_debug_register_displacement GC_PROTO((GC_word n));
+
+/* Explicitly trigger a full, world-stop collection. */
+GC_API void GC_gcollect GC_PROTO((void));
+
+/* Trigger a full world-stopped collection. Abort the collection if */
+/* and when stop_func returns a nonzero value. Stop_func will be */
+/* called frequently, and should be reasonably fast. This works even */
+/* if virtual dirty bits, and hence incremental collection, are not */
+/* available for this architecture. Collections can be aborted faster */
+/* than normal pause times for incremental collection. However, */
+/* aborted collections do no useful work; the next collection needs */
+/* to start from the beginning. */
+/* Return 0 if the collection was aborted, 1 if it succeeded. */
+typedef int (* GC_stop_func) GC_PROTO((void));
+GC_API int GC_try_to_collect GC_PROTO((GC_stop_func stop_func));
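+
+/* Example: bounding collection latency with a stop function. A */
+/* sketch; "deadline" here is assumed to be set by the caller: */
+/*
+ * time_t deadline;
+ *
+ * int past_deadline(void)
+ * {
+ *     return time(NULL) > deadline;
+ * }
+ * ...
+ * deadline = time(NULL) + 1;
+ * if (!GC_try_to_collect(past_deadline)) {
+ *     ... the collection was aborted ...
+ * }
+ */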
+
+/* Return the number of bytes in the heap. Excludes collector private */
+/* data structures. Includes empty blocks and fragmentation loss. */
+/* Includes some pages that were allocated but never written. */
+GC_API size_t GC_get_heap_size GC_PROTO((void));
+
+/* Return a lower bound on the number of free bytes in the heap. */
+GC_API size_t GC_get_free_bytes GC_PROTO((void));
+
+/* Return the number of bytes allocated since the last collection. */
+GC_API size_t GC_get_bytes_since_gc GC_PROTO((void));
+
+/* Return the total number of bytes allocated in this process. */
+/* Never decreases, except due to wrapping. */
+GC_API size_t GC_get_total_bytes GC_PROTO((void));
+
+/* Disable garbage collection. Even GC_gcollect calls will be */
+/* ineffective. */
+GC_API void GC_disable GC_PROTO((void));
+
+/* Reenable garbage collection. GC_disable() and GC_enable() calls */
+/* nest. Garbage collection is enabled if the number of calls to */
+/* both functions is equal. */
+GC_API void GC_enable GC_PROTO((void));
+
+/* Enable incremental/generational collection. */
+/* Not advisable unless dirty bits are */
+/* available or most heap objects are */
+/* pointerfree(atomic) or immutable. */
+/* Don't use in leak finding mode. */
+/* Ignored if GC_dont_gc is true. */
+/* Only the generational piece of this is */
+/* functional if GC_parallel is TRUE */
+/* or if GC_time_limit is GC_TIME_UNLIMITED. */
+/* Causes GC_local_gcj_malloc() to revert to */
+/* locked allocation. Must be called */
+/* before any GC_local_gcj_malloc() calls. */
+GC_API void GC_enable_incremental GC_PROTO((void));
+
+/* Does incremental mode write-protect pages? Returns zero or */
+/* more of the following, or'ed together: */
+#define GC_PROTECTS_POINTER_HEAP 1 /* May protect non-atomic objs. */
+#define GC_PROTECTS_PTRFREE_HEAP 2
+#define GC_PROTECTS_STATIC_DATA 4 /* Currently never. */
+#define GC_PROTECTS_STACK 8 /* Probably impractical. */
+
+#define GC_PROTECTS_NONE 0
+GC_API int GC_incremental_protection_needs GC_PROTO((void));
+
+/* Perform some garbage collection work, if appropriate. */
+/* Return 0 if there is no more work to be done. */
+/* Typically performs an amount of work corresponding roughly */
+/* to marking from one page. May do more work if further */
+/* progress requires it, e.g. if incremental collection is */
+/* disabled. It is reasonable to call this in a wait loop */
+/* until it returns 0. */
+GC_API int GC_collect_a_little GC_PROTO((void));
+
+/* Allocate an object of size lb bytes. The client guarantees that */
+/* as long as the object is live, it will be referenced by a pointer */
+/* that points to somewhere within the first 256 bytes of the object. */
+/* (This should normally be declared volatile to prevent the compiler */
+/* from invalidating this assertion.) This routine is only useful */
+/* if a large array is being allocated. It reduces the chance of */
+/* accidentally retaining such an array as a result of scanning an */
+/* integer that happens to be an address inside the array. (Actually, */
+/* it reduces the chance of the allocator not finding space for such */
+/* an array, since it will try hard to avoid introducing such a false */
+/* reference.) On a SunOS 4.X or MS Windows system this is recommended */
+/* for arrays likely to be larger than 100K or so. For other systems, */
+/* or if the collector is not configured to recognize all interior */
+/* pointers, the threshold is normally much higher. */
+GC_API GC_PTR GC_malloc_ignore_off_page GC_PROTO((size_t lb));
+GC_API GC_PTR GC_malloc_atomic_ignore_off_page GC_PROTO((size_t lb));
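+
+/* Example: allocating a large buffer while keeping a pointer to */
+/* its beginning, as the contract above requires. A sketch: */
+/*
+ * char * volatile big = GC_malloc_ignore_off_page(1 << 20);
+ *
+ * The volatile qualifier discourages the compiler from optimizing
+ * away the reference to the start of the buffer.
+ */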
+
+#if defined(__sgi) && !defined(__GNUC__) && _COMPILER_VERSION >= 720
+# define GC_ADD_CALLER
+# define GC_RETURN_ADDR (GC_word)__return_address
+#endif
+
+#if defined(__linux__) || defined(__GLIBC__)
+# include <features.h>
+# if (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 1 || __GLIBC__ > 2) \
+ && !defined(__ia64__)
+# ifndef GC_HAVE_BUILTIN_BACKTRACE
+# define GC_HAVE_BUILTIN_BACKTRACE
+# endif
+# endif
+# if defined(__i386__) || defined(__x86_64__)
+# define GC_CAN_SAVE_CALL_STACKS
+# endif
+#endif
+
+#if defined(GC_HAVE_BUILTIN_BACKTRACE) && !defined(GC_CAN_SAVE_CALL_STACKS)
+# define GC_CAN_SAVE_CALL_STACKS
+#endif
+
+#if defined(__sparc__)
+# define GC_CAN_SAVE_CALL_STACKS
+#endif
+
+/* If we're on a platform on which we can't save call stacks, but */
+/* gcc is normally used, we go ahead and define GC_ADD_CALLER. */
+/* We make this decision independent of whether gcc is actually being */
+/* used, in order to keep the interface consistent, and allow mixing */
+/* of compilers. */
+/* This may also be desirable if it is possible but expensive to */
+/* retrieve the call chain. */
+#if (defined(__linux__) || defined(__NetBSD__) || defined(__OpenBSD__) \
+ || defined(__FreeBSD__)) && !defined(GC_CAN_SAVE_CALL_STACKS)
+# define GC_ADD_CALLER
+# if __GNUC__ >= 3 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 95)
+ /* gcc knows how to retrieve return address, but we don't know */
+ /* how to generate call stacks. */
+# define GC_RETURN_ADDR (GC_word)__builtin_return_address(0)
+# else
+ /* Just pass 0 for gcc compatibility. */
+# define GC_RETURN_ADDR 0
+# endif
+#endif
+
+#ifdef GC_ADD_CALLER
+# define GC_EXTRAS GC_RETURN_ADDR, __FILE__, __LINE__
+# define GC_EXTRA_PARAMS GC_word ra, GC_CONST char * s, int i
+#else
+# define GC_EXTRAS __FILE__, __LINE__
+# define GC_EXTRA_PARAMS GC_CONST char * s, int i
+#endif
+
+/* Debugging (annotated) allocation. GC_gcollect will check */
+/* objects allocated in this way for overwrites, etc. */
+GC_API GC_PTR GC_debug_malloc
+ GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
+GC_API GC_PTR GC_debug_malloc_atomic
+ GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
+GC_API GC_PTR GC_debug_malloc_uncollectable
+ GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
+GC_API GC_PTR GC_debug_malloc_stubborn
+ GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
+GC_API GC_PTR GC_debug_malloc_ignore_off_page
+ GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
+GC_API GC_PTR GC_debug_malloc_atomic_ignore_off_page
+ GC_PROTO((size_t size_in_bytes, GC_EXTRA_PARAMS));
+GC_API void GC_debug_free GC_PROTO((GC_PTR object_addr));
+GC_API GC_PTR GC_debug_realloc
+ GC_PROTO((GC_PTR old_object, size_t new_size_in_bytes,
+ GC_EXTRA_PARAMS));
+GC_API void GC_debug_change_stubborn GC_PROTO((GC_PTR));
+GC_API void GC_debug_end_stubborn_change GC_PROTO((GC_PTR));
+
+/* Routines that allocate objects with debug information (like the */
+/* above), but just fill in dummy file and line number information. */
+/* Thus they can serve as drop-in malloc/realloc replacements. This */
+/* can be useful for two reasons: */
+/* 1) It allows the collector to be built with DBG_HDRS_ALL defined */
+/* even if some allocation calls come from 3rd party libraries */
+/* that can't be recompiled. */
+/* 2) On some platforms, the file and line information is redundant, */
+/* since it can be reconstructed from a stack trace. On such */
+/* platforms it may be more convenient not to recompile, e.g. for */
+/* leak detection. This can be accomplished by instructing the */
+/* linker to replace malloc/realloc with these. */
+GC_API GC_PTR GC_debug_malloc_replacement GC_PROTO((size_t size_in_bytes));
+GC_API GC_PTR GC_debug_realloc_replacement
+ GC_PROTO((GC_PTR object_addr, size_t size_in_bytes));
+
+# ifdef GC_DEBUG
+# define GC_MALLOC(sz) GC_debug_malloc(sz, GC_EXTRAS)
+# define GC_MALLOC_ATOMIC(sz) GC_debug_malloc_atomic(sz, GC_EXTRAS)
+# define GC_MALLOC_UNCOLLECTABLE(sz) \
+ GC_debug_malloc_uncollectable(sz, GC_EXTRAS)
+# define GC_MALLOC_IGNORE_OFF_PAGE(sz) \
+ GC_debug_malloc_ignore_off_page(sz, GC_EXTRAS)
+# define GC_MALLOC_ATOMIC_IGNORE_OFF_PAGE(sz) \
+ GC_debug_malloc_atomic_ignore_off_page(sz, GC_EXTRAS)
+# define GC_REALLOC(old, sz) GC_debug_realloc(old, sz, GC_EXTRAS)
+# define GC_FREE(p) GC_debug_free(p)
+# define GC_REGISTER_FINALIZER(p, f, d, of, od) \
+ GC_debug_register_finalizer(p, f, d, of, od)
+# define GC_REGISTER_FINALIZER_IGNORE_SELF(p, f, d, of, od) \
+ GC_debug_register_finalizer_ignore_self(p, f, d, of, od)
+# define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
+ GC_debug_register_finalizer_no_order(p, f, d, of, od)
+# define GC_MALLOC_STUBBORN(sz) GC_debug_malloc_stubborn(sz, GC_EXTRAS)
+# define GC_CHANGE_STUBBORN(p) GC_debug_change_stubborn(p)
+# define GC_END_STUBBORN_CHANGE(p) GC_debug_end_stubborn_change(p)
+# define GC_GENERAL_REGISTER_DISAPPEARING_LINK(link, obj) \
+ GC_general_register_disappearing_link(link, GC_base(obj))
+# define GC_REGISTER_DISPLACEMENT(n) GC_debug_register_displacement(n)
+# else
+# define GC_MALLOC(sz) GC_malloc(sz)
+# define GC_MALLOC_ATOMIC(sz) GC_malloc_atomic(sz)
+# define GC_MALLOC_UNCOLLECTABLE(sz) GC_malloc_uncollectable(sz)
+# define GC_MALLOC_IGNORE_OFF_PAGE(sz) \
+ GC_malloc_ignore_off_page(sz)
+# define GC_MALLOC_ATOMIC_IGNORE_OFF_PAGE(sz) \
+ GC_malloc_atomic_ignore_off_page(sz)
+# define GC_REALLOC(old, sz) GC_realloc(old, sz)
+# define GC_FREE(p) GC_free(p)
+# define GC_REGISTER_FINALIZER(p, f, d, of, od) \
+ GC_register_finalizer(p, f, d, of, od)
+# define GC_REGISTER_FINALIZER_IGNORE_SELF(p, f, d, of, od) \
+ GC_register_finalizer_ignore_self(p, f, d, of, od)
+# define GC_REGISTER_FINALIZER_NO_ORDER(p, f, d, of, od) \
+ GC_register_finalizer_no_order(p, f, d, of, od)
+# define GC_MALLOC_STUBBORN(sz) GC_malloc_stubborn(sz)
+# define GC_CHANGE_STUBBORN(p) GC_change_stubborn(p)
+# define GC_END_STUBBORN_CHANGE(p) GC_end_stubborn_change(p)
+# define GC_GENERAL_REGISTER_DISAPPEARING_LINK(link, obj) \
+ GC_general_register_disappearing_link(link, obj)
+# define GC_REGISTER_DISPLACEMENT(n) GC_register_displacement(n)
+# endif
+/* The following are included because they are often convenient, and */
+/* reduce the chance for a misspecifed size argument. But calls may */
+/* expand to something syntactically incorrect if t is a complicated */
+/* type expression. */
+# define GC_NEW(t) (t *)GC_MALLOC(sizeof (t))
+# define GC_NEW_ATOMIC(t) (t *)GC_MALLOC_ATOMIC(sizeof (t))
+# define GC_NEW_STUBBORN(t) (t *)GC_MALLOC_STUBBORN(sizeof (t))
+# define GC_NEW_UNCOLLECTABLE(t) (t *)GC_MALLOC_UNCOLLECTABLE(sizeof (t))
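+
+/* Example: typical use for a small linked-list node. A sketch: */
+/*
+ * struct node { struct node * next; int datum; };
+ * struct node * head = 0;
+ * ...
+ * struct node * n = GC_NEW(struct node);
+ * n->datum = 42;
+ * n->next = head;
+ * head = n;
+ *
+ * No explicit deallocation is needed; the node is reclaimed once
+ * it becomes unreachable.
+ */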
+
+/* Finalization. Some of these primitives are grossly unsafe. */
+/* The idea is to make them both cheap, and sufficient to build */
+/* a safer layer, closer to Modula-3, Java, or PCedar finalization. */
+/* The interface represents my conclusions from a long discussion */
+/* with Alan Demers, Dan Greene, Carl Hauser, Barry Hayes, */
+/* Christian Jacobi, and Russ Atkinson. It's not perfect, and */
+/* probably nobody else agrees with it. Hans-J. Boehm 3/13/92 */
+typedef void (*GC_finalization_proc)
+ GC_PROTO((GC_PTR obj, GC_PTR client_data));
+
+GC_API void GC_register_finalizer
+ GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+ GC_finalization_proc *ofn, GC_PTR *ocd));
+GC_API void GC_debug_register_finalizer
+ GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+ GC_finalization_proc *ofn, GC_PTR *ocd));
+ /* When obj is no longer accessible, invoke */
+ /* (*fn)(obj, cd). If a and b are inaccessible, and */
+ /* a points to b (after disappearing links have been */
+ /* made to disappear), then only a will be */
+ /* finalized. (If this does not create any new */
+ /* pointers to b, then b will be finalized after the */
+ /* next collection.) Any finalizable object that */
+ /* is reachable from itself by following one or more */
+ /* pointers will not be finalized (or collected). */
+ /* Thus cycles involving finalizable objects should */
+ /* be avoided, or broken by disappearing links. */
+ /* All but the last finalizer registered for an object */
+ /* is ignored. */
+ /* Finalization may be removed by passing 0 as fn. */
+ /* Finalizers are implicitly unregistered just before */
+ /* they are invoked. */
+ /* The old finalizer and client data are stored in */
+ /* *ofn and *ocd. */
+ /* Fn is never invoked on an accessible object, */
+ /* provided hidden pointers are converted to real */
+ /* pointers only if the allocation lock is held, and */
+ /* such conversions are not performed by finalization */
+ /* routines. */
+ /* If GC_register_finalizer is aborted as a result of */
+ /* a signal, the object may be left with no */
+ /* finalization, even if neither the old nor new */
+ /* finalizer were NULL. */
+ /* Obj should be the nonNULL starting address of an */
+ /* object allocated by GC_malloc or friends. */
+ /* Note that any garbage collectable object referenced */
+ /* by cd will be considered accessible until the */
+ /* finalizer is invoked. */
+
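+/* Example: closing a stream when its wrapper object dies. A */
+/* sketch only; a finalizer should be careful about allocating */
+/* from, or storing pointers into, the garbage collected heap: */
+/*
+ * struct wrapper { FILE * f; };
+ *
+ * void close_it(GC_PTR obj, GC_PTR client_data)
+ * {
+ *     fclose(((struct wrapper *)obj)->f);
+ * }
+ * ...
+ * struct wrapper * w = GC_NEW(struct wrapper);
+ * w->f = fopen("data", "r");
+ * GC_REGISTER_FINALIZER(w, close_it, 0, 0, 0);
+ */
+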
+/* Another version of the above follows. It ignores */
+/* self-cycles, i.e. pointers from a finalizable object to */
+/* itself. There is a stylistic argument that this is wrong, */
+/* but it's unavoidable for C++, since the compiler may */
+/* silently introduce these. It's also benign in that specific */
+/* case. And it helps if finalizable objects are split to */
+/* avoid cycles. */
+/* Note that cd will still be viewed as accessible, even if it */
+/* refers to the object itself. */
+GC_API void GC_register_finalizer_ignore_self
+ GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+ GC_finalization_proc *ofn, GC_PTR *ocd));
+GC_API void GC_debug_register_finalizer_ignore_self
+ GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+ GC_finalization_proc *ofn, GC_PTR *ocd));
+
+/* Another version of the above. It ignores all cycles. */
+/* It should probably only be used by Java implementations. */
+/* Note that cd will still be viewed as accessible, even if it */
+/* refers to the object itself. */
+GC_API void GC_register_finalizer_no_order
+ GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+ GC_finalization_proc *ofn, GC_PTR *ocd));
+GC_API void GC_debug_register_finalizer_no_order
+ GC_PROTO((GC_PTR obj, GC_finalization_proc fn, GC_PTR cd,
+ GC_finalization_proc *ofn, GC_PTR *ocd));
+
+
+/* The following routine may be used to break cycles between */
+/* finalizable objects, thus causing cyclic finalizable */
+/* objects to be finalized in the correct order. Standard */
+/* use involves calling GC_register_disappearing_link(&p), */
+/* where p is a pointer that is not followed by finalization */
+/* code, and should not be considered in determining */
+/* finalization order. */
+GC_API int GC_register_disappearing_link GC_PROTO((GC_PTR * /* link */));
+ /* Link should point to a field of a heap allocated */
+ /* object obj. *link will be cleared when obj is */
+ /* found to be inaccessible. This happens BEFORE any */
+ /* finalization code is invoked, and BEFORE any */
+ /* decisions about finalization order are made. */
+ /* This is useful in telling the finalizer that */
+ /* some pointers are not essential for proper */
+ /* finalization. This may avoid finalization cycles. */
+ /* Note that obj may be resurrected by another */
+ /* finalizer, and thus the clearing of *link may */
+ /* be visible to non-finalization code. */
+ /* There's an argument that an arbitrary action should */
+ /* be allowed here, instead of just clearing a pointer. */
+ /* But this causes problems if that action alters, or */
+ /* examines connectivity. */
+ /* Returns 1 if link was already registered, 0 */
+ /* otherwise. */
+ /* Only exists for backward compatibility. See below: */
+
+GC_API int GC_general_register_disappearing_link
+ GC_PROTO((GC_PTR * /* link */, GC_PTR obj));
+ /* A slight generalization of the above. *link is */
+ /* cleared when obj first becomes inaccessible. This */
+ /* can be used to implement weak pointers easily and */
+ /* safely. Typically link will point to a location */
+ /* holding a disguised pointer to obj. (A pointer */
+ /* inside an "atomic" object is effectively */
+ /* disguised.) In this way soft */
+ /* pointers are broken before any objects */
+ /* reachable from them are finalized. Each link */
+ /* may be registered only once, i.e. with one obj */
+ /* value. This was added after a long email discussion */
+ /* with John Ellis. */
+ /* Obj must be a pointer to the first word of an object */
+ /* we allocated. It is unsafe to explicitly deallocate */
+ /* the object containing link. Explicitly deallocating */
+ /* obj may or may not cause link to eventually be */
+ /* cleared. */
+GC_API int GC_unregister_disappearing_link GC_PROTO((GC_PTR * /* link */));
+ /* Returns 0 if link was not actually registered. */
+ /* Undoes a registration by either of the above two */
+ /* routines. */
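+
+/* Example: a weak-pointer sketch built from the above. The link */
+/* holds a disguised pointer (see HIDE_POINTER below) so that the */
+/* link itself does not keep obj alive: */
+/*
+ * GC_word * link = GC_malloc_atomic(sizeof (GC_word));
+ * *link = HIDE_POINTER(obj);
+ * GC_general_register_disappearing_link((GC_PTR *)link, obj);
+ * ...
+ * if (*link != 0) {
+ *     ... obj not yet reclaimed; REVEAL_POINTER(*link), applied
+ *     while holding the allocation lock, yields a usable pointer ...
+ * }
+ */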
+
+/* Returns !=0 if GC_invoke_finalizers has something to do. */
+GC_API int GC_should_invoke_finalizers GC_PROTO((void));
+
+GC_API int GC_invoke_finalizers GC_PROTO((void));
+ /* Run finalizers for all objects that are ready to */
+ /* be finalized. Return the number of finalizers */
+ /* that were run. Normally this is also called */
+ /* implicitly during some allocations. If */
+ /* GC_finalize_on_demand is nonzero, it must be called */
+ /* explicitly. */
+
+/* GC_set_warn_proc can be used to redirect or filter warning messages. */
+/* p may not be a NULL pointer. */
+typedef void (*GC_warn_proc) GC_PROTO((char *msg, GC_word arg));
+GC_API GC_warn_proc GC_set_warn_proc GC_PROTO((GC_warn_proc p));
+ /* Returns old warning procedure. */
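+
+/* Example: a warn proc that routes warnings to stderr. A sketch; */
+/* msg is a printf format string taking one GC_word argument: */
+/*
+ * void my_warn_proc(char * msg, GC_word arg)
+ * {
+ *     fprintf(stderr, msg, arg);
+ * }
+ * ...
+ * GC_warn_proc old_proc = GC_set_warn_proc(my_warn_proc);
+ */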
+
+GC_API GC_word GC_set_free_space_divisor GC_PROTO((GC_word value));
+ /* Set free_space_divisor. See above for definition. */
+ /* Returns old value. */
+
+/* The following is intended to be used by a higher level */
+/* (e.g. Java-like) finalization facility. It is expected */
+/* that finalization code will arrange for hidden pointers to */
+/* disappear. Otherwise objects can be accessed after they */
+/* have been collected. */
+/* Note that putting pointers in atomic objects or in */
+/* nonpointer slots of "typed" objects is equivalent to */
+/* disguising them in this way, and may have other advantages. */
+# if defined(I_HIDE_POINTERS) || defined(GC_I_HIDE_POINTERS)
+ typedef GC_word GC_hidden_pointer;
+# define HIDE_POINTER(p) (~(GC_hidden_pointer)(p))
+# define REVEAL_POINTER(p) ((GC_PTR)(HIDE_POINTER(p)))
+ /* Converting a hidden pointer to a real pointer requires verifying */
+ /* that the object still exists. This involves acquiring the */
+ /* allocator lock to avoid a race with the collector. */
+# endif /* I_HIDE_POINTERS */
+
+typedef GC_PTR (*GC_fn_type) GC_PROTO((GC_PTR client_data));
+GC_API GC_PTR GC_call_with_alloc_lock
+ GC_PROTO((GC_fn_type fn, GC_PTR client_data));
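+
+/* Example: revealing a hidden pointer under the allocation lock */
+/* (see I_HIDE_POINTERS above). A sketch, assuming the hidden */
+/* pointer is cleared to 0 by a disappearing link when dead: */
+/*
+ * GC_PTR reveal(GC_PTR client_data)
+ * {
+ *     GC_hidden_pointer h = *(GC_hidden_pointer *)client_data;
+ *     return h != 0 ? (GC_PTR)REVEAL_POINTER(h) : 0;
+ * }
+ * ...
+ * GC_PTR p = GC_call_with_alloc_lock(reveal, (GC_PTR)link);
+ */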
+
+/* The following routines are primarily intended for use with a */
+/* preprocessor which inserts calls to check C pointer arithmetic. */
+/* They indicate failure by invoking the corresponding _print_proc. */
+
+/* Check that p and q point to the same object. */
+/* Fail conspicuously if they don't. */
+/* Returns the first argument. */
+/* Succeeds if neither p nor q points to the heap. */
+/* May succeed if both p and q point to between heap objects. */
+GC_API GC_PTR GC_same_obj GC_PROTO((GC_PTR p, GC_PTR q));
+
+/* Checked pointer pre- and post- increment operations. Note that */
+/* the second argument is in units of bytes, not multiples of the */
+/* object size. This should either be invoked from a macro, or the */
+/* call should be automatically generated. */
+GC_API GC_PTR GC_pre_incr GC_PROTO((GC_PTR *p, size_t how_much));
+GC_API GC_PTR GC_post_incr GC_PROTO((GC_PTR *p, size_t how_much));
+
+/* Check that p is visible */
+/* to the collector as a possible pointer-containing location. */
+/* If it isn't, fail conspicuously. */
+/* Returns the argument in all cases. May erroneously succeed */
+/* in hard cases. (This is intended for debugging use with */
+/* untyped allocations. The idea is that it should be possible, though */
+/* slow, to add such a call to all indirect pointer stores.) */
+/* Currently useless for multithreaded worlds. */
+GC_API GC_PTR GC_is_visible GC_PROTO((GC_PTR p));
+
+/* Check that if p is a pointer to a heap page, then it points to */
+/* a valid displacement within a heap object. */
+/* Fail conspicuously if this property does not hold. */
+/* Uninteresting with GC_all_interior_pointers. */
+/* Always returns its argument. */
+GC_API GC_PTR GC_is_valid_displacement GC_PROTO((GC_PTR p));
+
+/* Safer, but slow, pointer addition. Probably useful mainly with */
+/* a preprocessor. Useful only for heap pointers. */
+#ifdef GC_DEBUG
+# define GC_PTR_ADD3(x, n, type_of_result) \
+ ((type_of_result)GC_same_obj((x)+(n), (x)))
+# define GC_PRE_INCR3(x, n, type_of_result) \
+ ((type_of_result)GC_pre_incr(&(x), (n)*sizeof(*x)))
+# define GC_POST_INCR2(x, type_of_result) \
+ ((type_of_result)GC_post_incr(&(x), sizeof(*x)))
+# ifdef __GNUC__
+# define GC_PTR_ADD(x, n) \
+ GC_PTR_ADD3(x, n, typeof(x))
+# define GC_PRE_INCR(x, n) \
+ GC_PRE_INCR3(x, n, typeof(x))
+# define GC_POST_INCR(x, n) \
+ GC_POST_INCR2(x, typeof(x))
+# else
+ /* We can't do this right without typeof, which ANSI */
+ /* decided was not sufficiently useful. Repeatedly */
+ /* mentioning the arguments seems too dangerous to be */
+ /* useful. So does leaving the result uncast. */
+# define GC_PTR_ADD(x, n) ((x)+(n))
+# endif
+#else /* !GC_DEBUG */
+# define GC_PTR_ADD3(x, n, type_of_result) ((x)+(n))
+# define GC_PTR_ADD(x, n) ((x)+(n))
+# define GC_PRE_INCR3(x, n, type_of_result) ((x) += (n))
+# define GC_PRE_INCR(x, n) ((x) += (n))
+# define GC_POST_INCR2(x, type_of_result) ((x)++)
+# define GC_POST_INCR(x, n) ((x)++)
+#endif
+
+/* Safer assignment of a pointer to a nonstack location. */
+#ifdef GC_DEBUG
+# if defined(__STDC__) || defined(_AIX)
+# define GC_PTR_STORE(p, q) \
+ (*(void **)GC_is_visible(p) = GC_is_valid_displacement(q))
+# else
+# define GC_PTR_STORE(p, q) \
+ (*(char **)GC_is_visible(p) = GC_is_valid_displacement(q))
+# endif
+#else /* !GC_DEBUG */
+# define GC_PTR_STORE(p, q) (*(p) = (q))
+#endif
+
+/* Functions called to report pointer checking errors */
+GC_API void (*GC_same_obj_print_proc) GC_PROTO((GC_PTR p, GC_PTR q));
+
+GC_API void (*GC_is_valid_displacement_print_proc)
+ GC_PROTO((GC_PTR p));
+
+GC_API void (*GC_is_visible_print_proc)
+ GC_PROTO((GC_PTR p));
+
+
+/* For pthread support, we generally need to intercept a number of */
+/* thread library calls. We do that here by macro defining them. */
+
+#if !defined(GC_USE_LD_WRAP) && \
+ (defined(GC_PTHREADS) || defined(GC_SOLARIS_THREADS))
+# include "gc_pthread_redirects.h"
+#endif
+
+# if defined(PCR) || defined(GC_SOLARIS_THREADS) || \
+ defined(GC_PTHREADS) || defined(GC_WIN32_THREADS)
+ /* Any flavor of threads except SRC_M3. */
+
+/* Register the current thread as a new thread whose stack(s) should */
+/* be traced by the GC. */
+/* If a platform does not implicitly do so, this must be called before */
+/* a thread can allocate garbage collected memory, or assign pointers */
+/* to the garbage collected heap. Once registered, a thread will be */
+/* stopped during garbage collections. */
+GC_API void GC_register_my_thread GC_PROTO((void));
+
+/* Unregister the current thread. The thread may no longer allocate */
+/* garbage collected memory or manipulate pointers to the garbage */
+/* collected heap after making this call. */
+GC_API void GC_unregister_my_thread GC_PROTO((void));
+
+GC_API GC_PTR GC_get_thread_stack_base GC_PROTO((void));
+
+/* This returns a list of objects, linked through their first */
+/* word. Its use can greatly reduce lock contention problems, since */
+/* the allocation lock can be acquired and released many fewer times. */
+/* lb must be large enough to hold the pointer field. */
+/* It is used internally by gc_local_alloc.h, which provides a simpler */
+/* programming interface on Linux. */
+GC_PTR GC_malloc_many(size_t lb);
+#define GC_NEXT(p) (*(GC_PTR *)(p)) /* Retrieve the next element */
+ /* in returned list. */
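+
+/* Example: carving objects off a GC_malloc_many result. A sketch: */
+/*
+ * GC_PTR free_list = GC_malloc_many(16);
+ * ...
+ * GC_PTR obj = free_list;      (take the head)
+ * free_list = GC_NEXT(obj);    (unlink it)
+ * GC_NEXT(obj) = 0;            (clear the link word)
+ */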
+extern void GC_thr_init GC_PROTO((void));/* Needed for Solaris/X86 */
+
+#endif /* THREADS && !SRC_M3 */
+
+/* Register a callback to control the scanning of dynamic libraries.
+ When the GC scans the static data of a dynamic library, it will
+ first call a user-supplied routine with the filename of the library and
+ the address and length of the memory region. This routine should
+ return nonzero if that region should be scanned. */
+GC_API void GC_register_has_static_roots_callback
+ (int (*callback)(const char *, void *, size_t));
+
+
+#if defined(GC_WIN32_THREADS) && !defined(__CYGWIN32__) && !defined(__CYGWIN__)
+# include <windows.h>
+
+ /*
+ * All threads must be created using GC_CreateThread, so that they will be
+ * recorded in the thread table. For backwards compatibility, this is not
+ * technically true if the GC is built as a dynamic library, since it can
+ * and does then use DllMain to keep track of thread creations. But new code
+ * should be built to call GC_CreateThread.
+ */
+ GC_API HANDLE WINAPI GC_CreateThread(
+ LPSECURITY_ATTRIBUTES lpThreadAttributes,
+ DWORD dwStackSize, LPTHREAD_START_ROUTINE lpStartAddress,
+ LPVOID lpParameter, DWORD dwCreationFlags, LPDWORD lpThreadId );
+
+# if defined(_WIN32_WCE)
+ /*
+ * win32_threads.c implements the real WinMain, which will start a new thread
+ * to call GC_WinMain after initializing the garbage collector.
+ */
+ int WINAPI GC_WinMain(
+ HINSTANCE hInstance,
+ HINSTANCE hPrevInstance,
+ LPWSTR lpCmdLine,
+ int nCmdShow );
+
+# ifndef GC_BUILD
+# define WinMain GC_WinMain
+# define CreateThread GC_CreateThread
+# endif
+# endif /* defined(_WIN32_WCE) */
+
+#endif /* defined(GC_WIN32_THREADS) && !cygwin */
+
+ /*
+ * Fully portable code should call GC_INIT() from the main program
+ * before making any other GC_ calls. On most platforms this is a
+ * no-op and the collector self-initializes. But a number of platforms
+ * make that too hard.
+ */
+#if (defined(sparc) || defined(__sparc)) && defined(sun)
+ /*
+ * If you are planning on putting
+ * the collector in a SunOS 5 dynamic library, you need to call GC_INIT()
+ * from the statically loaded program section.
+ * This circumvents a Solaris 2.X (X<=4) linker bug.
+ */
+# define GC_INIT() { extern end, etext; \
+ GC_noop(&end, &etext); }
+#else
+# if defined(__CYGWIN32__) || defined (_AIX)
+ /*
+ * Similarly gnu-win32 DLLs need explicit initialization from
+ * the main program, as does AIX.
+ */
+# ifdef __CYGWIN32__
+ extern int _data_start__[];
+ extern int _data_end__[];
+ extern int _bss_start__[];
+ extern int _bss_end__[];
+# define GC_MAX(x,y) ((x) > (y) ? (x) : (y))
+# define GC_MIN(x,y) ((x) < (y) ? (x) : (y))
+# define GC_DATASTART ((GC_PTR) GC_MIN(_data_start__, _bss_start__))
+# define GC_DATAEND ((GC_PTR) GC_MAX(_data_end__, _bss_end__))
+# ifdef GC_DLL
+# define GC_INIT() { GC_add_roots(GC_DATASTART, GC_DATAEND); }
+# else
+# define GC_INIT()
+# endif
+# endif
+# if defined(_AIX)
+ extern int _data[], _end[];
+# define GC_DATASTART ((GC_PTR)((ulong)_data))
+# define GC_DATAEND ((GC_PTR)((ulong)_end))
+# define GC_INIT() { GC_add_roots(GC_DATASTART, GC_DATAEND); }
+# endif
+# else
+# if defined(__APPLE__) && defined(__MACH__) || defined(GC_WIN32_THREADS)
+# define GC_INIT() { GC_init(); }
+# else
+# define GC_INIT()
+# endif /* !__MACH && !GC_WIN32_THREADS */
+# endif /* !AIX && !cygwin */
+#endif /* !sparc */
+
+#if !defined(_WIN32_WCE) \
+ && ((defined(_MSDOS) || defined(_MSC_VER)) && (_M_IX86 >= 300) \
+ || defined(_WIN32) && !defined(__CYGWIN32__) && !defined(__CYGWIN__))
+ /* win32S may not free all resources on process exit. */
+ /* This explicitly deallocates the heap. */
+ GC_API void GC_win32_free_heap ();
+#endif
+
+#if ( defined(_AMIGA) && !defined(GC_AMIGA_MAKINGLIB) )
+ /* Allocation really goes through GC_amiga_allocwrapper_do */
+# include "gc_amiga_redirects.h"
+#endif
+
+#if defined(GC_REDIRECT_TO_LOCAL) && !defined(GC_LOCAL_ALLOC_H)
+# include "gc_local_alloc.h"
+#endif
+
+#ifdef __cplusplus
+ } /* end of extern "C" */
+#endif
+
+/* External thread suspension support. These functions do not implement
+ * suspension counts or any other higher-level abstraction. Threads which
+ * have been suspended numerous times will resume with the very first call
+ * to GC_resume_thread.
+ */
+#if defined(GC_PTHREADS) && !defined(GC_SOLARIS_THREADS) \
+ && !defined(GC_WIN32_THREADS) && !defined(GC_DARWIN_THREADS)
+GC_API void GC_suspend_thread GC_PROTO((pthread_t));
+GC_API void GC_resume_thread GC_PROTO((pthread_t));
+#endif
+#endif /* _GC_H */
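
To see how these declarations fit together, here is a minimal sketch (not
part of the patch; sizes and names are illustrative) of a pthreads client
that batches allocations through GC_malloc_many and walks the returned list
with GC_NEXT. It assumes a build where defining GC_THREADS suffices, so
pthread_create is redirected via gc_pthread_redirects.h and the new thread
is registered automatically:

    #define GC_THREADS              /* before the threads header and gc.h */
    #include <pthread.h>
    #include <stdio.h>
    #include "gc.h"

    static void *worker(void *arg)
    {
        /* One lock acquisition yields a whole list of 16-byte objects,
           linked through their first words. */
        void *p = GC_malloc_many(16);
        int n = 0;
        while (p != 0) {
            void *next = GC_NEXT(p);  /* save the link first            */
            GC_NEXT(p) = 0;           /* first word is ordinary data now */
            ++n;
            p = next;
        }
        printf("batch of %d objects\n", n);
        return 0;
    }

    int main()
    {
        pthread_t t;
        GC_INIT();                         /* a no-op on most platforms */
        pthread_create(&t, 0, worker, 0);  /* redirected, so the thread
                                              registers itself          */
        pthread_join(t, 0);
        return 0;
    }
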
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_alloc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_alloc.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_alloc.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_alloc.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,383 @@
+/*
+ * Copyright (c) 1996-1998 by Silicon Graphics. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+//
+// This is a C++ header file that is intended to replace the SGI STL
+// alloc.h. This assumes SGI STL version < 3.0.
+//
+// This assumes the collector has been compiled with -DATOMIC_UNCOLLECTABLE
+// and -DALL_INTERIOR_POINTERS. We also recommend
+// -DREDIRECT_MALLOC=GC_uncollectable_malloc.
+//
+// Some of this could be faster in the explicit deallocation case. In particular,
+// we spend too much time clearing objects on the free lists. That could be avoided.
+//
+// This uses template classes with static members, and hence does not work
+// with g++ 2.7.2 and earlier.
+//
+// This code assumes that the collector itself has been compiled with a
+// compiler that defines __STDC__ .
+//
+
+#include "gc.h"
+
+#ifndef GC_ALLOC_H
+
+#define GC_ALLOC_H
+#define __ALLOC_H // Prevent inclusion of the default version. Ugly.
+#define __SGI_STL_ALLOC_H
+#define __SGI_STL_INTERNAL_ALLOC_H
+
+#ifndef __ALLOC
+# define __ALLOC alloc
+#endif
+
+#include <stddef.h>
+#include <string.h>
+
+// The following is just replicated from the conventional SGI alloc.h:
+
+template<class T, class alloc>
+class simple_alloc {
+
+public:
+ static T *allocate(size_t n)
+ { return 0 == n? 0 : (T*) alloc::allocate(n * sizeof (T)); }
+ static T *allocate(void)
+ { return (T*) alloc::allocate(sizeof (T)); }
+ static void deallocate(T *p, size_t n)
+ { if (0 != n) alloc::deallocate(p, n * sizeof (T)); }
+ static void deallocate(T *p)
+ { alloc::deallocate(p, sizeof (T)); }
+};
+
+#include "gc.h"
+
+// The following need to match collector data structures.
+// We can't include gc_priv.h, since that pulls in way too much stuff.
+// This should eventually be factored out into another include file.
+
+extern "C" {
+ extern void ** const GC_objfreelist_ptr;
+ extern void ** const GC_aobjfreelist_ptr;
+ extern void ** const GC_uobjfreelist_ptr;
+ extern void ** const GC_auobjfreelist_ptr;
+
+ extern void GC_incr_words_allocd(size_t words);
+ extern void GC_incr_mem_freed(size_t words);
+
+ extern char * GC_generic_malloc_words_small(size_t word, int kind);
+}
+
+// Object kinds; must match PTRFREE, NORMAL, UNCOLLECTABLE, and
+// AUNCOLLECTABLE in gc_priv.h.
+
+enum { GC_PTRFREE = 0, GC_NORMAL = 1, GC_UNCOLLECTABLE = 2,
+ GC_AUNCOLLECTABLE = 3 };
+
+enum { GC_max_fast_bytes = 255 };
+
+enum { GC_bytes_per_word = sizeof(char *) };
+
+enum { GC_byte_alignment = 8 };
+
+enum { GC_word_alignment = GC_byte_alignment/GC_bytes_per_word };
+
+inline void * &GC_obj_link(void * p)
+{ return *(void **)p; }
+
+// Compute a number of words whose total size is at least n+1 bytes.
+// The +1 allows for pointers one past the end of an object.
+inline size_t GC_round_up(size_t n)
+{
+ return ((n + GC_byte_alignment)/GC_byte_alignment)*GC_word_alignment;
+}
+
+// The same, but without allowing for the extra byte.
+inline size_t GC_round_up_uncollectable(size_t n)
+{
+ return ((n + GC_byte_alignment - 1)/GC_byte_alignment)*GC_word_alignment;
+}
+
+template <int dummy>
+class GC_aux_template {
+public:
+ // File local count of allocated words. Occasionally this is
+ // added into the global count. A separate count is necessary since the
+ // real one must be updated with a procedure call.
+ static size_t GC_words_recently_allocd;
+
+  // Same for uncollectable memory.  Not yet reflected in either
+ // GC_words_recently_allocd or GC_non_gc_bytes.
+ static size_t GC_uncollectable_words_recently_allocd;
+
+ // Similar counter for explicitly deallocated memory.
+ static size_t GC_mem_recently_freed;
+
+ // Again for uncollectable memory.
+ static size_t GC_uncollectable_mem_recently_freed;
+
+ static void * GC_out_of_line_malloc(size_t nwords, int kind);
+};
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_words_recently_allocd = 0;
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_uncollectable_words_recently_allocd = 0;
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_mem_recently_freed = 0;
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_uncollectable_mem_recently_freed = 0;
+
+template <int dummy>
+void * GC_aux_template<dummy>::GC_out_of_line_malloc(size_t nwords, int kind)
+{
+ GC_words_recently_allocd += GC_uncollectable_words_recently_allocd;
+ GC_non_gc_bytes +=
+ GC_bytes_per_word * GC_uncollectable_words_recently_allocd;
+ GC_uncollectable_words_recently_allocd = 0;
+
+ GC_mem_recently_freed += GC_uncollectable_mem_recently_freed;
+ GC_non_gc_bytes -=
+ GC_bytes_per_word * GC_uncollectable_mem_recently_freed;
+ GC_uncollectable_mem_recently_freed = 0;
+
+ GC_incr_words_allocd(GC_words_recently_allocd);
+ GC_words_recently_allocd = 0;
+
+ GC_incr_mem_freed(GC_mem_recently_freed);
+ GC_mem_recently_freed = 0;
+
+ return GC_generic_malloc_words_small(nwords, kind);
+}
+
+typedef GC_aux_template<0> GC_aux;
+
+// A fast, single-threaded, garbage-collected allocator.
+// We assume the first word of each object will be immediately overwritten.
+// In this version, deallocation is not a no-op, and explicit
+// deallocation is likely to help performance.
+template <int dummy>
+class single_client_gc_alloc_template {
+ public:
+ static void * allocate(size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc(n);
+ flh = GC_objfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_NORMAL);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_words_recently_allocd += nwords;
+ return op;
+ }
+ static void * ptr_free_allocate(size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc_atomic(n);
+ flh = GC_aobjfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_PTRFREE);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_words_recently_allocd += nwords;
+ return op;
+ }
+ static void deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_objfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ memset((char *)p + GC_bytes_per_word, 0,
+ GC_bytes_per_word * (nwords - 1));
+ *flh = p;
+ GC_aux::GC_mem_recently_freed += nwords;
+ }
+ }
+ static void ptr_free_deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_aobjfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ *flh = p;
+ GC_aux::GC_mem_recently_freed += nwords;
+ }
+ }
+};
+
+typedef single_client_gc_alloc_template<0> single_client_gc_alloc;
+
+// Once more, for uncollectable objects.
+template <int dummy>
+class single_client_alloc_template {
+ public:
+ static void * allocate(size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc_uncollectable(n);
+ flh = GC_uobjfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_UNCOLLECTABLE);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_uncollectable_words_recently_allocd += nwords;
+ return op;
+ }
+ static void * ptr_free_allocate(size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc_atomic_uncollectable(n);
+ flh = GC_auobjfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_AUNCOLLECTABLE);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_uncollectable_words_recently_allocd += nwords;
+ return op;
+ }
+ static void deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_uobjfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ *flh = p;
+ GC_aux::GC_uncollectable_mem_recently_freed += nwords;
+ }
+ }
+ static void ptr_free_deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_auobjfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ *flh = p;
+ GC_aux::GC_uncollectable_mem_recently_freed += nwords;
+ }
+ }
+};
+
+typedef single_client_alloc_template<0> single_client_alloc;
+
+template < int dummy >
+class gc_alloc_template {
+ public:
+ static void * allocate(size_t n) { return GC_malloc(n); }
+ static void * ptr_free_allocate(size_t n)
+ { return GC_malloc_atomic(n); }
+ static void deallocate(void *, size_t) { }
+ static void ptr_free_deallocate(void *, size_t) { }
+};
+
+typedef gc_alloc_template < 0 > gc_alloc;
+
+template < int dummy >
+class alloc_template {
+ public:
+ static void * allocate(size_t n) { return GC_malloc_uncollectable(n); }
+ static void * ptr_free_allocate(size_t n)
+ { return GC_malloc_atomic_uncollectable(n); }
+ static void deallocate(void *p, size_t) { GC_free(p); }
+ static void ptr_free_deallocate(void *p, size_t) { GC_free(p); }
+};
+
+typedef alloc_template < 0 > alloc;
+
+#ifdef _SGI_SOURCE
+
+// We want to specialize simple_alloc so that it does the right thing
+// for all pointerfree types. At the moment there is no portable way to
+// even approximate that. The following approximation should work for
+// SGI compilers, and perhaps some others.
+
+# define __GC_SPECIALIZE(T,alloc) \
+class simple_alloc<T, alloc> { \
+public: \
+ static T *allocate(size_t n) \
+ { return 0 == n? 0 : \
+ (T*) alloc::ptr_free_allocate(n * sizeof (T)); } \
+ static T *allocate(void) \
+ { return (T*) alloc::ptr_free_allocate(sizeof (T)); } \
+ static void deallocate(T *p, size_t n) \
+ { if (0 != n) alloc::ptr_free_deallocate(p, n * sizeof (T)); } \
+ static void deallocate(T *p) \
+ { alloc::ptr_free_deallocate(p, sizeof (T)); } \
+};
+
+__GC_SPECIALIZE(char, gc_alloc)
+__GC_SPECIALIZE(int, gc_alloc)
+__GC_SPECIALIZE(unsigned, gc_alloc)
+__GC_SPECIALIZE(float, gc_alloc)
+__GC_SPECIALIZE(double, gc_alloc)
+
+__GC_SPECIALIZE(char, alloc)
+__GC_SPECIALIZE(int, alloc)
+__GC_SPECIALIZE(unsigned, alloc)
+__GC_SPECIALIZE(float, alloc)
+__GC_SPECIALIZE(double, alloc)
+
+__GC_SPECIALIZE(char, single_client_gc_alloc)
+__GC_SPECIALIZE(int, single_client_gc_alloc)
+__GC_SPECIALIZE(unsigned, single_client_gc_alloc)
+__GC_SPECIALIZE(float, single_client_gc_alloc)
+__GC_SPECIALIZE(double, single_client_gc_alloc)
+
+__GC_SPECIALIZE(char, single_client_alloc)
+__GC_SPECIALIZE(int, single_client_alloc)
+__GC_SPECIALIZE(unsigned, single_client_alloc)
+__GC_SPECIALIZE(float, single_client_alloc)
+__GC_SPECIALIZE(double, single_client_alloc)
+
+#ifdef __STL_USE_STD_ALLOCATORS
+
+???copy stuff from stl_alloc.h or remove it to a different file ???
+
+#endif /* __STL_USE_STD_ALLOCATORS */
+
+#endif /* _SGI_SOURCE */
+
+#endif /* GC_ALLOC_H */
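
As a usage sketch (illustrative only; it assumes a single-threaded program
and a collector built as described at the top of this header), the classes
above can be used directly or through the simple_alloc adaptor:

    #include "gc_alloc.h"

    int main()
    {
        // Fast single-threaded path; pointer-free, so never scanned.
        int *a = (int *)
            single_client_gc_alloc::ptr_free_allocate(100 * sizeof(int));
        a[0] = 42;
        // Optional, but explicit deallocation helps performance here.
        single_client_gc_alloc::ptr_free_deallocate(a, 100 * sizeof(int));

        // The adaptor dispatches to the underlying allocator class.
        double *d = simple_alloc<double, gc_alloc>::allocate(10);
        d[0] = 3.14;
        // No matching call needed: gc_alloc's deallocate is a no-op and
        // the object is reclaimed once it becomes unreachable.
        return 0;
    }
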
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_allocator.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_allocator.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_allocator.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_allocator.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,243 @@
+/*
+ * Copyright (c) 1996-1997
+ * Silicon Graphics Computer Systems, Inc.
+ *
+ * Permission to use, copy, modify, distribute and sell this software
+ * and its documentation for any purpose is hereby granted without fee,
+ * provided that the above copyright notice appear in all copies and
+ * that both that copyright notice and this permission notice appear
+ * in supporting documentation. Silicon Graphics makes no
+ * representations about the suitability of this software for any
+ * purpose. It is provided "as is" without express or implied warranty.
+ *
+ * Copyright (c) 2002
+ * Hewlett-Packard Company
+ *
+ * Permission to use, copy, modify, distribute and sell this software
+ * and its documentation for any purpose is hereby granted without fee,
+ * provided that the above copyright notice appear in all copies and
+ * that both that copyright notice and this permission notice appear
+ * in supporting documentation. Hewlett-Packard Company makes no
+ * representations about the suitability of this software for any
+ * purpose. It is provided "as is" without express or implied warranty.
+ */
+
+/*
+ * This implements standard-conforming allocators that interact with
+ * the garbage collector.  gc_allocator<T> allocates garbage-collectable
+ * objects of type T.  traceable_allocator<T> allocates objects that
+ * are not themselves garbage collected, but are scanned by the
+ * collector for pointers to collectable objects.  traceable_allocator
+ * should be used for explicitly managed STL containers that may
+ * point to collectable objects.
+ *
+ * This code was derived from an earlier version of the GNU C++ standard
+ * library, which itself was derived from the SGI STL implementation.
+ */
+
+#ifndef GC_ALLOCATOR_H
+
+#define GC_ALLOCATOR_H
+
+#include "gc.h"
+
+#if defined(__GNUC__)
+# define GC_ATTR_UNUSED __attribute__((unused))
+#else
+# define GC_ATTR_UNUSED
+#endif
+
+/* First some helpers to allow us to dispatch on whether or not a type
+ * is known to be pointerfree.
+ * These are private, except that the client may invoke the
+ * GC_DECLARE_PTRFREE macro.
+ */
+
+struct GC_true_type {};
+struct GC_false_type {};
+
+template <class GC_tp>
+struct GC_type_traits {
+ GC_false_type GC_is_ptr_free;
+};
+
+# define GC_DECLARE_PTRFREE(T) \
+template<> struct GC_type_traits<T> { GC_true_type GC_is_ptr_free; }
+
+GC_DECLARE_PTRFREE(signed char);
+GC_DECLARE_PTRFREE(unsigned char);
+GC_DECLARE_PTRFREE(signed short);
+GC_DECLARE_PTRFREE(unsigned short);
+GC_DECLARE_PTRFREE(signed int);
+GC_DECLARE_PTRFREE(unsigned int);
+GC_DECLARE_PTRFREE(signed long);
+GC_DECLARE_PTRFREE(unsigned long);
+GC_DECLARE_PTRFREE(float);
+GC_DECLARE_PTRFREE(double);
+/* The client may want to add others. */
+
+// In the following GC_Tp is GC_true_type iff we are allocating a
+// pointerfree object.
+template <class GC_Tp>
+inline void * GC_selective_alloc(size_t n, GC_Tp) {
+ return GC_MALLOC(n);
+}
+
+template <>
+inline void * GC_selective_alloc<GC_true_type>(size_t n, GC_true_type) {
+ return GC_MALLOC_ATOMIC(n);
+}
+
+/* Now the public gc_allocator<T> class:
+ */
+template <class GC_Tp>
+class gc_allocator {
+public:
+ typedef size_t size_type;
+ typedef ptrdiff_t difference_type;
+ typedef GC_Tp* pointer;
+ typedef const GC_Tp* const_pointer;
+ typedef GC_Tp& reference;
+ typedef const GC_Tp& const_reference;
+ typedef GC_Tp value_type;
+
+ template <class GC_Tp1> struct rebind {
+ typedef gc_allocator<GC_Tp1> other;
+ };
+
+ gc_allocator() {}
+# ifndef _MSC_VER
+ // I'm not sure why this is needed here in addition to the following.
+ // The standard specifies it for the standard allocator, but VC++ rejects
+ // it. -HB
+ gc_allocator(const gc_allocator&) throw() {}
+# endif
+ template <class GC_Tp1> gc_allocator(const gc_allocator<GC_Tp1>&) throw() {}
+ ~gc_allocator() throw() {}
+
+ pointer address(reference GC_x) const { return &GC_x; }
+ const_pointer address(const_reference GC_x) const { return &GC_x; }
+
+ // GC_n is permitted to be 0. The C++ standard says nothing about what
+ // the return value is when GC_n == 0.
+ GC_Tp* allocate(size_type GC_n, const void* = 0) {
+ GC_type_traits<GC_Tp> traits;
+ return static_cast<GC_Tp *>
+ (GC_selective_alloc(GC_n * sizeof(GC_Tp),
+ traits.GC_is_ptr_free));
+ }
+
+ // __p is not permitted to be a null pointer.
+ void deallocate(pointer __p, size_type GC_ATTR_UNUSED GC_n)
+ { GC_FREE(__p); }
+
+ size_type max_size() const throw()
+ { return size_t(-1) / sizeof(GC_Tp); }
+
+ void construct(pointer __p, const GC_Tp& __val) { new(__p) GC_Tp(__val); }
+ void destroy(pointer __p) { __p->~GC_Tp(); }
+};
+
+template<>
+class gc_allocator<void> {
+ typedef size_t size_type;
+ typedef ptrdiff_t difference_type;
+ typedef void* pointer;
+ typedef const void* const_pointer;
+ typedef void value_type;
+
+ template <class GC_Tp1> struct rebind {
+ typedef gc_allocator<GC_Tp1> other;
+ };
+};
+
+
+template <class GC_T1, class GC_T2>
+inline bool operator==(const gc_allocator<GC_T1>&, const gc_allocator<GC_T2>&)
+{
+ return true;
+}
+
+template <class GC_T1, class GC_T2>
+inline bool operator!=(const gc_allocator<GC_T1>&, const gc_allocator<GC_T2>&)
+{
+ return false;
+}
+
+/*
+ * And the public traceable_allocator class.
+ */
+
+// Note that we currently don't specialize the pointer-free case, since a
+// pointer-free traceable container doesn't make that much sense,
+// though it could become an issue due to abstraction boundaries.
+template <class GC_Tp>
+class traceable_allocator {
+public:
+ typedef size_t size_type;
+ typedef ptrdiff_t difference_type;
+ typedef GC_Tp* pointer;
+ typedef const GC_Tp* const_pointer;
+ typedef GC_Tp& reference;
+ typedef const GC_Tp& const_reference;
+ typedef GC_Tp value_type;
+
+ template <class GC_Tp1> struct rebind {
+ typedef traceable_allocator<GC_Tp1> other;
+ };
+
+ traceable_allocator() throw() {}
+# ifndef _MSC_VER
+ traceable_allocator(const traceable_allocator&) throw() {}
+# endif
+ template <class GC_Tp1> traceable_allocator
+ (const traceable_allocator<GC_Tp1>&) throw() {}
+ ~traceable_allocator() throw() {}
+
+ pointer address(reference GC_x) const { return &GC_x; }
+ const_pointer address(const_reference GC_x) const { return &GC_x; }
+
+ // GC_n is permitted to be 0. The C++ standard says nothing about what
+ // the return value is when GC_n == 0.
+ GC_Tp* allocate(size_type GC_n, const void* = 0) {
+ return static_cast<GC_Tp*>(GC_MALLOC_UNCOLLECTABLE(GC_n * sizeof(GC_Tp)));
+ }
+
+ // __p is not permitted to be a null pointer.
+ void deallocate(pointer __p, size_type GC_ATTR_UNUSED GC_n)
+ { GC_FREE(__p); }
+
+ size_type max_size() const throw()
+ { return size_t(-1) / sizeof(GC_Tp); }
+
+ void construct(pointer __p, const GC_Tp& __val) { new(__p) GC_Tp(__val); }
+ void destroy(pointer __p) { __p->~GC_Tp(); }
+};
+
+template<>
+class traceable_allocator<void> {
+ typedef size_t size_type;
+ typedef ptrdiff_t difference_type;
+ typedef void* pointer;
+ typedef const void* const_pointer;
+ typedef void value_type;
+
+ template <class GC_Tp1> struct rebind {
+ typedef traceable_allocator<GC_Tp1> other;
+ };
+};
+
+
+template <class GC_T1, class GC_T2>
+inline bool operator==(const traceable_allocator<GC_T1>&, const traceable_allocator<GC_T2>&)
+{
+ return true;
+}
+
+template <class GC_T1, class GC_T2>
+inline bool operator!=(const traceable_allocator<GC_T1>&, const traceable_allocator<GC_T2>&)
+{
+ return false;
+}
+
+#endif /* GC_ALLOCATOR_H */
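
For instance (a sketch, assuming a standard library whose containers accept
user-supplied allocators), the two allocators divide the work like this:

    #include <vector>
    #include "gc_allocator.h"

    struct node { node *next; int v; };

    int main()
    {
        // int is declared pointer-free above, so the backing store is
        // allocated with GC_MALLOC_ATOMIC and never scanned.
        std::vector<int, gc_allocator<int> > ints(10, 0);

        // node contains pointers: GC_type_traits defaults to "may have
        // pointers", so the buffer comes from GC_MALLOC and is scanned.
        std::vector<node, gc_allocator<node> > nodes;

        // An explicitly managed container that may point into the
        // collected heap: uncollectable, but traced.
        std::vector<node*, traceable_allocator<node*> > roots;
        return 0;
    }
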
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_amiga_redirects.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_amiga_redirects.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_amiga_redirects.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_amiga_redirects.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,30 @@
+#ifndef GC_AMIGA_REDIRECTS_H
+
+# define GC_AMIGA_REDIRECTS_H
+
+# if ( defined(_AMIGA) && !defined(GC_AMIGA_MAKINGLIB) )
+ extern void *GC_amiga_realloc(void *old_object,size_t new_size_in_bytes);
+# define GC_realloc(a,b) GC_amiga_realloc(a,b)
+ extern void GC_amiga_set_toany(void (*func)(void));
+ extern int GC_amiga_free_space_divisor_inc;
+ extern void *(*GC_amiga_allocwrapper_do) \
+ (size_t size,void *(*AllocFunction)(size_t size2));
+# define GC_malloc(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc)
+# define GC_malloc_atomic(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc_atomic)
+# define GC_malloc_uncollectable(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc_uncollectable)
+# define GC_malloc_stubborn(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc_stubborn)
+# define GC_malloc_atomic_uncollectable(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc_atomic_uncollectable)
+# define GC_malloc_ignore_off_page(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc_ignore_off_page)
+# define GC_malloc_atomic_ignore_off_page(a) \
+ (*GC_amiga_allocwrapper_do)(a,GC_malloc_atomic_ignore_off_page)
+# endif /* _AMIGA && !GC_AMIGA_MAKINGLIB */
+
+#endif /* GC_AMIGA_REDIRECTS_H */
+
+
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_backptr.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_backptr.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_backptr.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_backptr.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,65 @@
+/*
+ * This is a simple API to implement pointer back tracing, i.e.
+ * to answer questions such as "who is pointing to this" or
+ * "why is this object being retained by the collector"
+ *
+ * This API assumes that we have an ANSI C compiler.
+ *
+ * Most of these calls yield useful information only after
+ * a garbage collection. Usually the client will first force
+ * a full collection and then gather information, preferably
+ * before much intervening allocation.
+ *
+ * The implementation of the interface is only about 99.9999%
+ * correct. It is intended to be good enough for profiling,
+ * but is not intended to be used with production code.
+ *
+ * Results are likely to be much more useful if all allocation is
+ * accomplished through the debugging allocators.
+ *
+ * The implementation idea is due to A. Demers.
+ */
+
+#ifndef GC_BACKPTR_H
+#define GC_BACKPTR_H
+/* Store information about the object referencing dest in *base_p */
+/* and *offset_p. */
+/* If multiple objects or roots point to dest, the one reported */
+/* will be the last one used by the garbage collector to trace the */
+/* object. */
+/* source is root ==> *base_p = address, *offset_p = 0 */
+/* source is heap object ==> *base_p != 0, *offset_p = offset */
+/* Returns the kind of the reference found (see GC_ref_kind below); */
+/* GC_UNREFERENCED means the source couldn't be determined. */
+/* Dest can be any address within a heap object. */
+typedef enum { GC_UNREFERENCED, /* No reference info available. */
+ GC_NO_SPACE, /* Dest not allocated with debug alloc */
+ GC_REFD_FROM_ROOT, /* Referenced directly by root *base_p */
+ GC_REFD_FROM_REG, /* Referenced from a register, i.e. */
+ /* a root without an address. */
+ GC_REFD_FROM_HEAP, /* Referenced from another heap obj. */
+ GC_FINALIZER_REFD /* Finalizable and hence accessible. */
+} GC_ref_kind;
+
+GC_ref_kind GC_get_back_ptr_info(void *dest, void **base_p, size_t *offset_p);
+
+/* Generate a random heap address. */
+/* The resulting address is in the heap, but */
+/* not necessarily inside a valid object. */
+void * GC_generate_random_heap_address(void);
+
+/* Generate a random address inside a valid marked heap object. */
+void * GC_generate_random_valid_address(void);
+
+/* Force a garbage collection and generate a backtrace from a */
+/* random heap address. */
+/* This uses the GC logging mechanism (GC_printf) to produce */
+/* output. It can often be called from a debugger. The */
+/* source in dbg_mlc.c also serves as a sample client. */
+void GC_generate_random_backtrace(void);
+
+/* Print a backtrace from a specific address. Used by the */
+/* above. The client should call GC_gcollect() immediately */
+/* before invocation. */
+void GC_print_backtrace(void *);
+
+#endif /* GC_BACKPTR_H */
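
A typical profiling use looks roughly like this (a sketch; it assumes a
collector built with -DKEEP_BACK_PTRS and, per the advice above, allocation
through the debugging allocators):

    #include <stdio.h>
    #define GC_DEBUG
    #include "gc.h"
    #include "gc_backptr.h"

    int main()
    {
        void *obj = GC_MALLOC(64);   /* debug allocator records back ptrs */
        GC_gcollect();               /* back-pointer info is only valid
                                        after a collection                */
        void *base;
        size_t offset;
        switch (GC_get_back_ptr_info(obj, &base, &offset)) {
        case GC_REFD_FROM_HEAP:
            printf("from heap object %p + %lu\n", base,
                   (unsigned long)offset);
            break;
        case GC_REFD_FROM_ROOT:
            printf("from root at %p\n", base);
            break;
        default:
            printf("register, finalizer, or unknown source\n");
            break;
        }
        GC_generate_random_backtrace();  /* logs through GC_printf */
        return 0;
    }
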
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_config.h.in
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_config.h.in?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_config.h.in (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_config.h.in Thu Nov 8 16:56:19 2007
@@ -0,0 +1,185 @@
+/* include/gc_config.h.in. Generated from configure.ac by autoheader. */
+
+/* allows all pointers to the interior of objects to be recognized */
+#undef ALL_INTERIOR_POINTERS
+
+/* include code for GC_malloc_atomic_uncollectable */
+#undef ATOMIC_UNCOLLECTABLE
+
+/* No description */
+#undef DATASTART_IS_ETEXT
+
+/* Make sure that all objects have debug headers */
+#undef DBG_HDRS_ALL
+
+/* No description */
+#undef DGUX_THREADS
+
+/* Target is ECOS */
+#undef ECOS
+
+/* support AIX threads */
+#undef GC_AIX_THREADS
+
+/* support for Mac OS X pthreads */
+#undef GC_DARWIN_THREADS
+
+/* support for DB/UX on I386 threads */
+#undef GC_DGUX386_THREADS
+
+/* support FreeBSD threads */
+#undef GC_FREEBSD_THREADS
+
+/* include support for gcj */
+#undef GC_GCJ_SUPPORT
+
+/* enables support for HP/UX 11 pthreads */
+#undef GC_HPUX_THREADS
+
+/* support for Irix pthreads */
+#undef GC_IRIX_THREADS
+
+/* support for Xavier Leroy's Linux threads */
+#undef GC_LINUX_THREADS
+
+/* support for Tru64 pthreads */
+#undef GC_OSF1_THREADS
+
+/* support for Solaris pthreads */
+#undef GC_SOLARIS_PTHREADS
+
+/* support for Solaris (thr_) threads */
+#undef GC_SOLARIS_THREADS
+
+/* support for win32 threads */
+#undef GC_WIN32_THREADS
+
+/* ppc_thread_state64_t has field r0 */
+#undef HAS_PPC_THREAD_STATE64_R0
+
+/* ppc_thread_state64_t has field __r0 */
+#undef HAS_PPC_THREAD_STATE64___R0
+
+/* ppc_thread_state_t has field r0 */
+#undef HAS_PPC_THREAD_STATE_R0
+
+/* ppc_thread_state_t has field __r0 */
+#undef HAS_PPC_THREAD_STATE___R0
+
+/* x86_thread_state32_t has field eax */
+#undef HAS_X86_THREAD_STATE32_EAX
+
+/* x86_thread_state32_t has field __eax */
+#undef HAS_X86_THREAD_STATE32___EAX
+
+/* x86_thread_state64_t has field rax */
+#undef HAS_X86_THREAD_STATE64_RAX
+
+/* x86_thread_state64_t has field __rax */
+#undef HAS_X86_THREAD_STATE64___RAX
+
+/* Define to 1 if you have the <inttypes.h> header file. */
+#undef HAVE_INTTYPES_H
+
+/* Define to 1 if you have the <memory.h> header file. */
+#undef HAVE_MEMORY_H
+
+/* Define to 1 if you have the `pthread_getattr_np' function. */
+#undef HAVE_PTHREAD_GETATTR_NP
+
+/* Define to 1 if you have the <stdint.h> header file. */
+#undef HAVE_STDINT_H
+
+/* Define to 1 if you have the <stdlib.h> header file. */
+#undef HAVE_STDLIB_H
+
+/* Define to 1 if you have the <strings.h> header file. */
+#undef HAVE_STRINGS_H
+
+/* Define to 1 if you have the <string.h> header file. */
+#undef HAVE_STRING_H
+
+/* Define to 1 if you have the <sys/stat.h> header file. */
+#undef HAVE_SYS_STAT_H
+
+/* Define to 1 if you have the <sys/types.h> header file. */
+#undef HAVE_SYS_TYPES_H
+
+/* Define to 1 if you have the <unistd.h> header file. */
+#undef HAVE_UNISTD_H
+
+/* make it somewhat safer to finalize objects out of order */
+#undef JAVA_FINALIZATION
+
+/* Add code to save back pointers */
+#undef KEEP_BACK_PTRS
+
+/* Enable GC_PRINT_BACK_HEIGHT environment variable */
+#undef MAKE_BACK_GRAPH
+
+/* removes GC_dump */
+#undef NO_DEBUGGING
+
+/* cause some or all of the heap to not have execute permission */
+#undef NO_EXECUTE_PERMISSION
+
+/* Define to 1 if your C compiler doesn't accept -c and -o together. */
+#undef NO_MINUS_C_MINUS_O
+
+/* does not disable signals */
+#undef NO_SIGNALS
+
+/* use empty GC_disable_signals and GC_enable_signals */
+#undef NO_SIGSET
+
+/* Define to the address where bug reports for this package should be sent. */
+#undef PACKAGE_BUGREPORT
+
+/* Define to the full name of this package. */
+#undef PACKAGE_NAME
+
+/* Define to the full name and version of this package. */
+#undef PACKAGE_STRING
+
+/* Define to the one symbol short name of this package. */
+#undef PACKAGE_TARNAME
+
+/* Define to the version of this package. */
+#undef PACKAGE_VERSION
+
+/* allow the marker to run in multiple threads */
+#undef PARALLEL_MARK
+
+/* number of call frames saved with objects allocated through the debugging
+ interface */
+#undef SAVE_CALL_COUNT
+
+/* disables statistics printing */
+#undef SILENT
+
+/* PROC_VDB in Solaris 2.5 gives wrong values for dirty bits */
+#undef SOLARIS25_PROC_VDB_BUG_FIXED
+
+/* No description */
+#undef STACKBASE
+
+/* Define to 1 if you have the ANSI C header files. */
+#undef STDC_HEADERS
+
+/* Avoid Solaris 5.3 dynamic library bug */
+#undef SUNOS53_SHARED_LIB
+
+/* define GC_local_malloc() & GC_local_malloc_atomic() */
+#undef THREAD_LOCAL_ALLOC
+
+/* use tls for boehm */
+#undef USE_COMPILER_TLS
+
+/* use MMAP instead of sbrk to get new memory */
+#undef USE_MMAP
+
+/* POSIX version of C Source */
+#undef _POSIX_C_SOURCE
+
+/* Required define if using POSIX threads */
+#undef _REENTRANT
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_config_macros.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_config_macros.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_config_macros.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_config_macros.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,119 @@
+/*
+ * This should never be included directly. It is included only from gc.h.
+ * We separate it only to make gc.h more suitable as documentation.
+ */
+#if !defined(_REENTRANT) && (defined(GC_SOLARIS_THREADS) \
+ || defined(GC_SOLARIS_PTHREADS) \
+ || defined(GC_HPUX_THREADS) \
+ || defined(GC_AIX_THREADS) \
+ || defined(GC_LINUX_THREADS))
+# define _REENTRANT
+ /* Better late than never. This fails if system headers that */
+ /* depend on this were previously included. */
+#endif
+
+#if defined(GC_DGUX386_THREADS) && !defined(_POSIX4A_DRAFT10_SOURCE)
+# define _POSIX4A_DRAFT10_SOURCE 1
+#endif
+
+# if defined(GC_SOLARIS_PTHREADS) || defined(GC_FREEBSD_THREADS) || \
+ defined(GC_IRIX_THREADS) || defined(GC_LINUX_THREADS) || \
+ defined(GC_HPUX_THREADS) || defined(GC_OSF1_THREADS) || \
+ defined(GC_DGUX386_THREADS) || defined(GC_DARWIN_THREADS) || \
+ defined(GC_AIX_THREADS) || \
+ (defined(GC_WIN32_THREADS) && defined(__CYGWIN32__))
+# define GC_PTHREADS
+# endif
+
+#if defined(GC_THREADS) && !defined(GC_PTHREADS)
+# if defined(__linux__)
+# define GC_LINUX_THREADS
+# define GC_PTHREADS
+# endif
+# if !defined(LINUX) && (defined(_PA_RISC1_1) || defined(_PA_RISC2_0) \
+ || defined(hppa) || defined(__HPPA))
+# define GC_HPUX_THREADS
+# define GC_PTHREADS
+# endif
+# if !defined(__linux__) && (defined(__alpha) || defined(__alpha__))
+# define GC_OSF1_THREADS
+# define GC_PTHREADS
+# endif
+# if defined(__mips) && !defined(__linux__)
+# define GC_IRIX_THREADS
+# define GC_PTHREADS
+# endif
+# if defined(__sparc) && !defined(__linux__)
+# define GC_SOLARIS_PTHREADS
+# define GC_PTHREADS
+# endif
+# if defined(__APPLE__) && defined(__MACH__) && defined(__ppc__)
+# define GC_DARWIN_THREADS
+# define GC_PTHREADS
+# endif
+# if !defined(GC_PTHREADS) && defined(__FreeBSD__)
+# define GC_FREEBSD_THREADS
+# define GC_PTHREADS
+# endif
+# if defined(DGUX) && (defined(i386) || defined(__i386__))
+# define GC_DGUX386_THREADS
+# define GC_PTHREADS
+# endif
+# if defined(_AIX)
+# define GC_AIX_THREADS
+# define GC_PTHREADS
+# endif
+#endif /* GC_THREADS */
+
+#if defined(GC_THREADS) && !defined(GC_PTHREADS) && \
+ (defined(_WIN32) || defined(_MSC_VER) || defined(__CYGWIN__) \
+ || defined(__MINGW32__) || defined(__BORLANDC__) \
+ || defined(_WIN32_WCE))
+# define GC_WIN32_THREADS
+#endif
+
+#if defined(GC_SOLARIS_PTHREADS) && !defined(GC_SOLARIS_THREADS)
+# define GC_SOLARIS_THREADS
+#endif
+
+# define __GC
+# ifndef _WIN32_WCE
+# include <stddef.h>
+# else /* ! _WIN32_WCE */
+/* Yet more kluges for WinCE */
+# include <stdlib.h> /* size_t is defined here */
+ typedef long ptrdiff_t; /* ptrdiff_t is not defined */
+# endif
+
+#if defined(_DLL) && !defined(GC_NOT_DLL) && !defined(GC_DLL)
+# define GC_DLL
+#endif
+
+#if defined(__MINGW32__) && defined(GC_DLL)
+# ifdef GC_BUILD
+# define GC_API __declspec(dllexport)
+# else
+# define GC_API __declspec(dllimport)
+# endif
+#endif
+
+#if (defined(__DMC__) || defined(_MSC_VER)) && defined(GC_DLL)
+# ifdef GC_BUILD
+# define GC_API extern __declspec(dllexport)
+# else
+# define GC_API __declspec(dllimport)
+# endif
+#endif
+
+#if defined(__WATCOMC__) && defined(GC_DLL)
+# ifdef GC_BUILD
+# define GC_API extern __declspec(dllexport)
+# else
+# define GC_API extern __declspec(dllimport)
+# endif
+#endif
+
+#ifndef GC_API
+#define GC_API extern
+#endif
+
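In practice all of this dispatch machinery reduces to one rule for the
client: define a single threads macro before anything else. A sketch of a
translation unit for a Linux pthreads build (building with -D_REENTRANT is
safest, since gc.h can only define it "better late than never"):

    /* The generic macro; the dispatch above turns it into
       GC_LINUX_THREADS + GC_PTHREADS on this platform.  A specific
       macro such as GC_LINUX_THREADS would work equally well. */
    #define GC_THREADS

    #include <pthread.h>   /* threads header first ...               */
    #include "gc.h"        /* ... then gc.h, which sets _REENTRANT
                              and the pthread redirections           */

    int main()
    {
        GC_INIT();
        return 0;
    }
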
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_cpp.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_cpp.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_cpp.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_cpp.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,367 @@
+#ifndef GC_CPP_H
+#define GC_CPP_H
+/****************************************************************************
+Copyright (c) 1994 by Xerox Corporation. All rights reserved.
+
+THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+
+Permission is hereby granted to use or copy this program for any
+purpose, provided the above notices are retained on all copies.
+Permission to modify the code and to distribute modified code is
+granted, provided the above notices are retained, and a notice that
+the code was modified is included with the above copyright notice.
+****************************************************************************
+
+C++ Interface to the Boehm Collector
+
+ John R. Ellis and Jesse Hull
+
+This interface provides access to the Boehm collector. It provides
+basic facilities similar to those described in "Safe, Efficient
+Garbage Collection for C++", by John R. Elis and David L. Detlefs
+(ftp://ftp.parc.xerox.com/pub/ellis/gc).
+
+All heap-allocated objects are either "collectable" or
+"uncollectable". Programs must explicitly delete uncollectable
+objects, whereas the garbage collector will automatically delete
+collectable objects when it discovers them to be inaccessible.
+Collectable objects may freely point at uncollectable objects and vice
+versa.
+
+Objects allocated with the built-in "::operator new" are uncollectable.
+
+Objects derived from class "gc" are collectable. For example:
+
+ class A: public gc {...};
+ A* a = new A; // a is collectable.
+
+Collectable instances of non-class types can be allocated using the GC
+(or UseGC) placement:
+
+ typedef int A[ 10 ];
+ A* a = new (GC) A;
+
+Uncollectable instances of classes derived from "gc" can be allocated
+using the NoGC placement:
+
+ class A: public gc {...};
+ A* a = new (NoGC) A; // a is uncollectable.
+
+Both uncollectable and collectable objects can be explicitly deleted
+with "delete", which invokes an object's destructors and frees its
+storage immediately.
+
+A collectable object may have a clean-up function, which will be
+invoked when the collector discovers the object to be inaccessible.
+An object derived from "gc_cleanup" or containing a member derived
+from "gc_cleanup" has a default clean-up function that invokes the
+object's destructors. Explicit clean-up functions may be specified as
+an additional placement argument:
+
+ A* a = ::new (GC, MyCleanup) A;
+
+An object is considered "accessible" by the collector if it can be
+reached by a path of pointers from static variables, automatic
+variables of active functions, or from some object with clean-up
+enabled; pointers from an object to itself are ignored.
+
+Thus, if objects A and B both have clean-up functions, and A points at
+B, B is considered accessible. After A's clean-up is invoked and its
+storage released, B will then become inaccessible and will have its
+clean-up invoked. If A points at B and B points to A, forming a
+cycle, then that's considered a storage leak, and neither will be
+collectable. See the interface gc.h for low-level facilities for
+handling such cycles of objects with clean-up.
+
+The collector cannot guarantee that it will find all inaccessible
+objects. In practice, it finds almost all of them.
+
+
+Cautions:
+
+1. Be sure the collector has been augmented with "make c++".
+
+2. If your compiler supports the new "operator new[]" syntax, then
+add -DGC_OPERATOR_NEW_ARRAY to the Makefile.
+
+If your compiler doesn't support "operator new[]", beware that an
+array of type T, where T is derived from "gc", may or may not be
+allocated as a collectable object (it depends on the compiler). Use
+the explicit GC placement to make the array collectable. For example:
+
+ class A: public gc {...};
+ A* a1 = new A[ 10 ]; // collectable or uncollectable?
+ A* a2 = new (GC) A[ 10 ]; // collectable
+
+3. The destructors of collectable arrays of objects derived from
+"gc_cleanup" will not be invoked properly. For example:
+
+ class A: public gc_cleanup {...};
+ A* a = new (GC) A[ 10 ]; // destructors not invoked correctly
+
+Typically, only the destructor for the first element of the array will
+be invoked when the array is garbage-collected. To get all the
+destructors of any array executed, you must supply an explicit
+clean-up function:
+
+ A* a = new (GC, MyCleanUp) A[ 10 ];
+
+(Implementing clean-up of arrays correctly, portably, and in a way
+that preserves the correct exception semantics requires a language
+extension, e.g. the "gc" keyword.)
+
+4. Compiler bugs:
+
+* Solaris 2's CC (SC3.0) doesn't implement t->~T() correctly, so the
+destructors of classes derived from gc_cleanup won't be invoked.
+You'll have to explicitly register a clean-up function with
+new-placement syntax.
+
+* Evidently cfront 3.0 does not allow destructors to be explicitly
+invoked using the ANSI-conforming syntax t->~T(). If you're using
+cfront 3.0, you'll have to comment out the class gc_cleanup, which
+uses explicit invocation.
+
+5. GC name conflicts:
+
+Many other systems seem to use the identifier "GC" as an abbreviation
+for "Graphics Context". Since version 5.0, GC placement has been replaced
+by UseGC. GC is an alias for UseGC, unless GC_NAME_CONFLICT is defined.
+
+****************************************************************************/
+
+#include "gc.h"
+
+#ifndef THINK_CPLUS
+# define GC_cdecl
+#else
+# define GC_cdecl _cdecl
+#endif
+
+#if ! defined( GC_NO_OPERATOR_NEW_ARRAY ) \
+ && !defined(_ENABLE_ARRAYNEW) /* Digimars */ \
+ && (defined(__BORLANDC__) && (__BORLANDC__ < 0x450) \
+ || (defined(__GNUC__) && \
+ (__GNUC__ < 2 || __GNUC__ == 2 && __GNUC_MINOR__ < 6)) \
+ || (defined(__WATCOMC__) && __WATCOMC__ < 1050))
+# define GC_NO_OPERATOR_NEW_ARRAY
+#endif
+
+#if !defined(GC_NO_OPERATOR_NEW_ARRAY) && !defined(GC_OPERATOR_NEW_ARRAY)
+# define GC_OPERATOR_NEW_ARRAY
+#endif
+
+#if ! defined ( __BORLANDC__ ) /* Confuses the Borland compiler. */ \
+ && ! defined ( __sgi )
+# define GC_PLACEMENT_DELETE
+#endif
+
+enum GCPlacement {UseGC,
+#ifndef GC_NAME_CONFLICT
+ GC=UseGC,
+#endif
+ NoGC, PointerFreeGC};
+
+class gc {public:
+ inline void* operator new( size_t size );
+ inline void* operator new( size_t size, GCPlacement gcp );
+ inline void* operator new( size_t size, void *p );
+ /* Must be redefined here, since the other overloadings */
+ /* hide the global definition. */
+ inline void operator delete( void* obj );
+# ifdef GC_PLACEMENT_DELETE
+ inline void operator delete( void*, void* );
+# endif
+
+#ifdef GC_OPERATOR_NEW_ARRAY
+ inline void* operator new[]( size_t size );
+ inline void* operator new[]( size_t size, GCPlacement gcp );
+ inline void* operator new[]( size_t size, void *p );
+ inline void operator delete[]( void* obj );
+# ifdef GC_PLACEMENT_DELETE
+  inline void operator delete[]( void*, void* );
+# endif
+#endif /* GC_OPERATOR_NEW_ARRAY */
+ };
+ /*
+ Instances of classes derived from "gc" will be allocated in the
+ collected heap by default, unless an explicit NoGC placement is
+ specified. */
+
+class gc_cleanup: virtual public gc {public:
+ inline gc_cleanup();
+ inline virtual ~gc_cleanup();
+private:
+ inline static void GC_cdecl cleanup( void* obj, void* clientData );};
+ /*
+ Instances of classes derived from "gc_cleanup" will be allocated
+ in the collected heap by default. When the collector discovers an
+ inaccessible object derived from "gc_cleanup" or containing a
+ member derived from "gc_cleanup", its destructors will be
+ invoked. */
+
+extern "C" {typedef void (*GCCleanUpFunc)( void* obj, void* clientData );}
+
+#ifdef _MSC_VER
+ // Disable warning that "no matching operator delete found; memory will
+ // not be freed if initialization throws an exception"
+# pragma warning(disable:4291)
+#endif
+
+inline void* operator new(
+ size_t size,
+ GCPlacement gcp,
+ GCCleanUpFunc cleanup = 0,
+ void* clientData = 0 );
+ /*
+ Allocates a collectable or uncollected object, according to the
+ value of "gcp".
+
+ For collectable objects, if "cleanup" is non-null, then when the
+ allocated object "obj" becomes inaccessible, the collector will
+ invoke the function "cleanup( obj, clientData )" but will not
+ invoke the object's destructors. It is an error to explicitly
+ delete an object allocated with a non-null "cleanup".
+
+ It is an error to specify a non-null "cleanup" with NoGC or for
+ classes derived from "gc_cleanup" or containing members derived
+ from "gc_cleanup". */
+
+
+#ifdef _MSC_VER
+ /** This ensures that the system default operator new[] doesn't get
+ * undefined, which is what seems to happen on VC++ 6 for some reason
+ * if we define a multi-argument operator new[].
+   * There seems to be no way to really redirect new in this environment
+   * without including this everywhere.
+ */
+ void *operator new[]( size_t size );
+
+ void operator delete[](void* obj);
+
+ void* operator new( size_t size);
+
+ void operator delete(void* obj);
+
+  // This operator new is used by VC++ for Debug builds.
+ void* operator new( size_t size,
+ int ,//nBlockUse,
+ const char * szFileName,
+ int nLine );
+#endif /* _MSC_VER */
+
+
+#ifdef GC_OPERATOR_NEW_ARRAY
+
+inline void* operator new[](
+ size_t size,
+ GCPlacement gcp,
+ GCCleanUpFunc cleanup = 0,
+ void* clientData = 0 );
+ /*
+ The operator new for arrays, identical to the above. */
+
+#endif /* GC_OPERATOR_NEW_ARRAY */
+
+/****************************************************************************
+
+Inline implementation
+
+****************************************************************************/
+
+inline void* gc::operator new( size_t size ) {
+ return GC_MALLOC( size );}
+
+inline void* gc::operator new( size_t size, GCPlacement gcp ) {
+ if (gcp == UseGC)
+ return GC_MALLOC( size );
+ else if (gcp == PointerFreeGC)
+ return GC_MALLOC_ATOMIC( size );
+ else
+ return GC_MALLOC_UNCOLLECTABLE( size );}
+
+inline void* gc::operator new( size_t size, void *p ) {
+ return p;}
+
+inline void gc::operator delete( void* obj ) {
+ GC_FREE( obj );}
+
+#ifdef GC_PLACEMENT_DELETE
+ inline void gc::operator delete( void*, void* ) {}
+#endif
+
+#ifdef GC_OPERATOR_NEW_ARRAY
+
+inline void* gc::operator new[]( size_t size ) {
+ return gc::operator new( size );}
+
+inline void* gc::operator new[]( size_t size, GCPlacement gcp ) {
+ return gc::operator new( size, gcp );}
+
+inline void* gc::operator new[]( size_t size, void *p ) {
+ return p;}
+
+inline void gc::operator delete[]( void* obj ) {
+ gc::operator delete( obj );}
+
+#ifdef GC_PLACEMENT_DELETE
+ inline void gc::operator delete[]( void*, void* ) {}
+#endif
+
+#endif /* GC_OPERATOR_NEW_ARRAY */
+
+
+inline gc_cleanup::~gc_cleanup() {
+ GC_register_finalizer_ignore_self( GC_base(this), 0, 0, 0, 0 );}
+
+inline void gc_cleanup::cleanup( void* obj, void* displ ) {
+ ((gc_cleanup*) ((char*) obj + (ptrdiff_t) displ))->~gc_cleanup();}
+
+inline gc_cleanup::gc_cleanup() {
+ GC_finalization_proc oldProc;
+ void* oldData;
+ void* base = GC_base( (void *) this );
+ if (0 != base) {
+ // Don't call the debug version, since this is a real base address.
+ GC_register_finalizer_ignore_self(
+ base, (GC_finalization_proc)cleanup, (void*) ((char*) this - (char*) base),
+ &oldProc, &oldData );
+ if (0 != oldProc) {
+ GC_register_finalizer_ignore_self( base, oldProc, oldData, 0, 0 );}}}
+
+inline void* operator new(
+ size_t size,
+ GCPlacement gcp,
+ GCCleanUpFunc cleanup,
+ void* clientData )
+{
+ void* obj;
+
+ if (gcp == UseGC) {
+ obj = GC_MALLOC( size );
+ if (cleanup != 0)
+ GC_REGISTER_FINALIZER_IGNORE_SELF(
+ obj, cleanup, clientData, 0, 0 );}
+ else if (gcp == PointerFreeGC) {
+ obj = GC_MALLOC_ATOMIC( size );}
+ else {
+ obj = GC_MALLOC_UNCOLLECTABLE( size );};
+ return obj;}
+
+
+#ifdef GC_OPERATOR_NEW_ARRAY
+
+inline void* operator new[](
+ size_t size,
+ GCPlacement gcp,
+ GCCleanUpFunc cleanup,
+ void* clientData )
+{
+ return ::operator new( size, gcp, cleanup, clientData );}
+
+#endif /* GC_OPERATOR_NEW_ARRAY */
+
+
+#endif /* GC_CPP_H */
+
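Tying the interface together, a complete toy program might read as follows
(a sketch; the class names and MyCleanup are illustrative, and the array
placement form assumes GC_OPERATOR_NEW_ARRAY):

    #include <stdio.h>
    #include "gc_cpp.h"

    class Point: public gc {          // collectable, no clean-up
    public:
        int x, y;
    };

    class Handle: public gc_cleanup { // destructor runs when reclaimed
    public:
        ~Handle() { printf("Handle cleaned up\n"); }
    };

    static void MyCleanup(void *obj, void *clientData) {
        printf("explicit clean-up of %p\n", obj);
    }

    int main()
    {
        Point *p = new Point;                     // collected heap
        Point *q = new (NoGC) Point;              // must be deleted
        int   *a = new (PointerFreeGC) int[10];   // collected, unscanned
        Point *r = new (UseGC, MyCleanup) Point;  // clean-up on reclaim
        Handle *h = new Handle;

        delete q;         // mandatory for NoGC objects
        GC_gcollect();    // may run MyCleanup and ~Handle for dead objects
        return 0;
    }
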
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_ext_config.h.in
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_ext_config.h.in?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_ext_config.h.in (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_ext_config.h.in Thu Nov 8 16:56:19 2007
@@ -0,0 +1,7 @@
+/* include/gc_ext_config.h.in. This contains definitions needed by
+external clients that do not want to include the full gc.h. Currently this
+is used by libjava/include/boehm-gc.h. */
+
+#undef THREAD_LOCAL_ALLOC
+
+#undef HAVE_PTHREAD_GETATTR_NP
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_gcj.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_gcj.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_gcj.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_gcj.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,113 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+ * Copyright 1996-1999 by Silicon Graphics. All rights reserved.
+ * Copyright 1999 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/* This file assumes the collector has been compiled with GC_GCJ_SUPPORT */
+/* and that an ANSI C compiler is available. */
+
+/*
+ * We allocate objects whose first word contains a pointer to a struct
+ * describing the object type. This struct contains a garbage collector mark
+ * descriptor at offset MARK_DESCR_OFFSET. Alternatively, the objects
+ * may be marked by the mark procedure passed to GC_init_gcj_malloc.
+ */
+
+#ifndef GC_GCJ_H
+
+#define GC_GCJ_H
+
+#ifndef MARK_DESCR_OFFSET
+# define MARK_DESCR_OFFSET sizeof(word)
+#endif
+ /* Gcj keeps GC descriptor as second word of vtable. This */
+ /* probably needs to be adjusted for other clients. */
+ /* We currently assume that this offset is such that: */
+ /* - all objects of this kind are large enough to have */
+ /* a value at that offset, and */
+ /* - it is not zero. */
+ /* These assumptions allow objects on the free list to be */
+ /* marked normally. */
+
+#ifndef _GC_H
+# include "gc.h"
+#endif
+
+/* The following allocators signal an out of memory condition with */
+/* return GC_oom_fn(bytes); */
+
+/* The following function must be called before the gcj allocators */
+/* can be invoked. */
+/* mp_index and mp are the index and mark_proc (see gc_mark.h) */
+/* respectively for the allocated objects. Mark_proc will be */
+/* used to build the descriptor for objects allocated through the */
+/* debugging interface. The mark_proc will be invoked on all such */
+/* objects with an "environment" value of 1. The client may choose */
+/* to use the same mark_proc for some of its generated mark descriptors.*/
+/* In that case, it should use a different "environment" value to */
+/* detect the presence or absence of the debug header. */
+/* Mp is really of type mark_proc, as defined in gc_mark.h. We don't */
+/* want to include that here for namespace pollution reasons. */
+extern void GC_init_gcj_malloc(int mp_index, void * /* really mark_proc */mp);
+
+/* Allocate an object, clear it, and store the pointer to the */
+/* type structure (vtable in gcj). */
+/* This adds a byte at the end of the object if GC_malloc would.*/
+extern void * GC_gcj_malloc(size_t lb, void * ptr_to_struct_containing_descr);
+/* The debug versions allocate such that the specified mark_proc */
+/* is always invoked. */
+extern void * GC_debug_gcj_malloc(size_t lb,
+ void * ptr_to_struct_containing_descr,
+ GC_EXTRA_PARAMS);
+
+/* Similar to the above, but the size is in words, and we don't */
+/* adjust it. The size is assumed to be such that it can be */
+/* allocated as a small object. */
+/* Unless it is known that the collector is not configured */
+/* with USE_MARK_BYTES and unless it is known that the object */
+/* has weak alignment requirements, lw must be even. */
+extern void * GC_gcj_fast_malloc(size_t lw,
+ void * ptr_to_struct_containing_descr);
+extern void * GC_debug_gcj_fast_malloc(size_t lw,
+ void * ptr_to_struct_containing_descr,
+ GC_EXTRA_PARAMS);
+
+/* Similar to GC_gcj_malloc, but assumes that a pointer to near the */
+/* beginning of the resulting object is always maintained. */
+extern void * GC_gcj_malloc_ignore_off_page(size_t lb,
+ void * ptr_to_struct_containing_descr);
+
+/* The kind numbers of normal and debug gcj objects. */
+/* Useful only for debug support, we hope. */
+extern int GC_gcj_kind;
+
+extern int GC_gcj_debug_kind;
+
+# if defined(GC_LOCAL_ALLOC_H) && defined(GC_REDIRECT_TO_LOCAL)
+ --> gc_local_alloc.h should be included after this. Otherwise
+ --> we undo the redirection.
+# endif
+
+# ifdef GC_DEBUG
+# define GC_GCJ_MALLOC(s,d) GC_debug_gcj_malloc(s,d,GC_EXTRAS)
+# define GC_GCJ_FAST_MALLOC(s,d) GC_debug_gcj_fast_malloc(s,d,GC_EXTRAS)
+# define GC_GCJ_MALLOC_IGNORE_OFF_PAGE(s,d) GC_debug_gcj_malloc(s,d,GC_EXTRAS)
+# else
+# define GC_GCJ_MALLOC(s,d) GC_gcj_malloc(s,d)
+# define GC_GCJ_FAST_MALLOC(s,d) GC_gcj_fast_malloc(s,d)
+# define GC_GCJ_MALLOC_IGNORE_OFF_PAGE(s,d) \
+ GC_gcj_malloc_ignore_off_page(s,d)
+# endif
+
+#endif /* GC_GCJ_H */
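
Concretely, a client sets up a type structure whose second word is the mark
descriptor and passes it at every allocation. The sketch below is
illustrative only: the struct names are invented, the descriptor uses the
GC_DS_LENGTH convention from gc_mark.h ("scan the whole object"), and the
mark procedure registered for the debugging interface is a stub; it also
assumes a collector built with GC_GCJ_SUPPORT.

    #include "gc.h"
    #include "gc_gcj.h"
    #include "gc_mark.h"    /* GC_DS_LENGTH, mark_proc signature */

    struct vtable {
        void   *methods;    /* first word: whatever the client wants */
        GC_word mark_descr; /* second word: at MARK_DESCR_OFFSET     */
    };

    struct object {
        struct vtable *vt;  /* first word points to the type struct  */
        struct object *next;
    };

    /* Invoked only for objects allocated through the debugging
       interface (env == 1); a real client would push addr's pointer
       fields onto the mark stack here. */
    static struct GC_ms_entry *stub_mark(GC_word *addr,
                                         struct GC_ms_entry *msp,
                                         struct GC_ms_entry *lim,
                                         GC_word env)
    {
        return msp;
    }

    static struct vtable object_vt = {
        0,
        (GC_word)sizeof(struct object) | GC_DS_LENGTH /* scan it all */
    };

    int main()
    {
        GC_init_gcj_malloc(0, (void *)stub_mark);
        struct object *o = (struct object *)
            GC_GCJ_MALLOC(sizeof(struct object), &object_vt);
        o->next = 0;        /* o->vt was stored by the allocator */
        return 0;
    }
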
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_inl.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_inl.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_inl.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_inl.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,107 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, October 3, 1995 2:07 pm PDT */
+
+# ifndef GC_PRIVATE_H
+# include "private/gc_priv.h"
+# endif
+
+/* USE OF THIS FILE IS NOT RECOMMENDED unless GC_all_interior_pointers */
+/* is always set, or the collector has been built with */
+/* -DDONT_ADD_BYTE_AT_END, or the specified size includes a pointerfree */
+/* word at the end. In the standard collector configuration, */
+/* the final word of each object may not be scanned. */
+/* This interface is most useful for compilers that generate C. */
+/* Manual use is hereby discouraged. */
+
+/* Allocate n words (NOT BYTES). X is made to point to the result. */
+/* It is assumed that n < MAXOBJSZ, and */
+/* that n > 0. On machines requiring double word alignment of some */
+/* data, we also assume that n is 1 or even. */
+/* If the collector is built with -DUSE_MARK_BYTES or -DPARALLEL_MARK, */
+/* the n = 1 case is also disallowed. */
+/* Effectively this means that portable code should make sure n is even.*/
+/* This bypasses the */
+/* MERGE_SIZES mechanism. In order to minimize the number of distinct */
+/* free lists that are maintained, the caller should ensure that a */
+/* small number of distinct values of n are used. (The MERGE_SIZES */
+/* mechanism normally does this by ensuring that only the leading three */
+/* bits of n may be nonzero. See misc.c for details.) We really */
+/* recommend this only in cases in which n is a constant, and no */
+/* locking is required. */
+/* In that case it may allow the compiler to perform substantial */
+/* additional optimizations. */
+# define GC_MALLOC_WORDS(result,n) \
+{ \
+ register ptr_t op; \
+ register ptr_t *opp; \
+ DCL_LOCK_STATE; \
+ \
+ opp = &(GC_objfreelist[n]); \
+ FASTLOCK(); \
+ if( !FASTLOCK_SUCCEEDED() || (op = *opp) == 0 ) { \
+ FASTUNLOCK(); \
+ (result) = GC_generic_malloc_words_small((n), NORMAL); \
+ } else { \
+ *opp = obj_link(op); \
+ obj_link(op) = 0; \
+ GC_words_allocd += (n); \
+ FASTUNLOCK(); \
+ (result) = (GC_PTR) op; \
+ } \
+}
+
+
+/* The same for atomic objects: */
+# define GC_MALLOC_ATOMIC_WORDS(result,n) \
+{ \
+ register ptr_t op; \
+ register ptr_t *opp; \
+ DCL_LOCK_STATE; \
+ \
+ opp = &(GC_aobjfreelist[n]); \
+ FASTLOCK(); \
+ if( !FASTLOCK_SUCCEEDED() || (op = *opp) == 0 ) { \
+ FASTUNLOCK(); \
+ (result) = GC_generic_malloc_words_small((n), PTRFREE); \
+ } else { \
+ *opp = obj_link(op); \
+ obj_link(op) = 0; \
+ GC_words_allocd += (n); \
+ FASTUNLOCK(); \
+ (result) = (GC_PTR) op; \
+ } \
+}
+
+/* And once more for two word initialized objects: */
+# define GC_CONS(result, first, second) \
+{ \
+ register ptr_t op; \
+ register ptr_t *opp; \
+ DCL_LOCK_STATE; \
+ \
+ opp = &(GC_objfreelist[2]); \
+ FASTLOCK(); \
+ if( !FASTLOCK_SUCCEEDED() || (op = *opp) == 0 ) { \
+ FASTUNLOCK(); \
+ op = GC_generic_malloc_words_small(2, NORMAL); \
+ } else { \
+ *opp = obj_link(op); \
+ GC_words_allocd += 2; \
+ FASTUNLOCK(); \
+ } \
+ ((word *)op)[0] = (word)(first); \
+ ((word *)op)[1] = (word)(second); \
+ (result) = (GC_PTR) op; \
+}
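
Subject to the caveats at the top of the file, client code might use these
macros as in the following sketch (it requires the collector's private
headers on the include path, as the file itself does); sizes are in words,
kept constant and even, as recommended above.

#include "gc_inl.h"

/* Fast two-word cons cell via GC_CONS. */
static void *cons(void *head, void *tail) {
    void *result;
    GC_CONS(result, head, tail);
    return result;
}

/* Fast allocation of a four-word object that may contain pointers. */
static void *new_quad(void) {
    void *result;
    GC_MALLOC_WORDS(result, 4);   /* n constant and even */
    return result;
}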
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_inline.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_inline.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_inline.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_inline.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1 @@
+# include "gc_inl.h"
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_local_alloc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_local_alloc.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_local_alloc.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_local_alloc.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,89 @@
+/*
+ * Copyright (c) 2000 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/*
+ * Interface for thread local allocation. Memory obtained
+ * this way can be used by all threads, as though it were obtained
+ * from an allocator like GC_malloc. The difference is that GC_local_malloc
+ * counts the number of allocations of a given size from the current thread,
+ * and uses GC_malloc_many to perform the allocations once a threshold
+ * is exceeded. Thus far less synchronization may be needed.
+ * Allocation of known large objects should not use this interface.
+ * This interface is designed primarily for fast allocation of small
+ * objects on multiprocessors, e.g. for a JVM running on an MP server.
+ *
+ * If this file is included with GC_GCJ_SUPPORT defined, GCJ-style
+ * bitmap allocation primitives will also be included.
+ *
+ * If this file is included with GC_REDIRECT_TO_LOCAL defined, then
+ * GC_MALLOC, GC_MALLOC_ATOMIC, and possibly GC_GCJ_MALLOC will
+ * be redefined to use the thread local allocator.
+ *
+ * The interface is available only if the collector is built with
+ * -DTHREAD_LOCAL_ALLOC, which is currently supported only on Linux.
+ *
+ * The debugging allocators use standard, not thread-local allocation.
+ *
+ * These routines normally require an explicit call to GC_init(), though
+ * that may be done from a constructor function.
+ */
+
+#ifndef GC_LOCAL_ALLOC_H
+#define GC_LOCAL_ALLOC_H
+
+#ifndef _GC_H
+# include "gc.h"
+#endif
+
+#if defined(GC_GCJ_SUPPORT) && !defined(GC_GCJ_H)
+# include "gc_gcj.h"
+#endif
+
+/* We assume ANSI C for this interface. */
+
+GC_PTR GC_local_malloc(size_t bytes);
+
+GC_PTR GC_local_malloc_atomic(size_t bytes);
+
+#if defined(GC_GCJ_SUPPORT)
+ GC_PTR GC_local_gcj_malloc(size_t bytes,
+ void * ptr_to_struct_containing_descr);
+#endif
+
+# ifdef GC_DEBUG
+ /* We don't really use local allocation in this case. */
+# define GC_LOCAL_MALLOC(s) GC_debug_malloc(s,GC_EXTRAS)
+# define GC_LOCAL_MALLOC_ATOMIC(s) GC_debug_malloc_atomic(s,GC_EXTRAS)
+# ifdef GC_GCJ_SUPPORT
+# define GC_LOCAL_GCJ_MALLOC(s,d) GC_debug_gcj_malloc(s,d,GC_EXTRAS)
+# endif
+# else
+# define GC_LOCAL_MALLOC(s) GC_local_malloc(s)
+# define GC_LOCAL_MALLOC_ATOMIC(s) GC_local_malloc_atomic(s)
+# ifdef GC_GCJ_SUPPORT
+# define GC_LOCAL_GCJ_MALLOC(s,d) GC_local_gcj_malloc(s,d)
+# endif
+# endif
+
+# ifdef GC_REDIRECT_TO_LOCAL
+# undef GC_MALLOC
+# define GC_MALLOC(s) GC_LOCAL_MALLOC(s)
+# undef GC_MALLOC_ATOMIC
+# define GC_MALLOC_ATOMIC(s) GC_LOCAL_MALLOC_ATOMIC(s)
+# ifdef GC_GCJ_SUPPORT
+# undef GC_GCJ_MALLOC
+# define GC_GCJ_MALLOC(s,d) GC_LOCAL_GCJ_MALLOC(s,d)
+# endif
+# endif
+
+#endif /* GC_LOCAL_ALLOC_H */
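
A sketch of the intended use, assuming the collector was built with
-DTHREAD_LOCAL_ALLOC on a supported (Linux) configuration:

#define GC_LINUX_THREADS
#include "gc.h"
#define GC_REDIRECT_TO_LOCAL
#include "gc_local_alloc.h"

/* GC_MALLOC now expands to GC_LOCAL_MALLOC, so small allocations   */
/* made by worker threads usually avoid the global allocation lock. */
static void *make_node(size_t n) {
    return GC_MALLOC(n);
}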
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_mark.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_mark.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_mark.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_mark.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,203 @@
+/*
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 2001 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ */
+
+/*
+ * This contains interfaces to the GC marker that are likely to be useful to
+ * clients that provide detailed heap layout information to the collector.
+ * This interface should not be used by normal C or C++ clients.
+ * It will be useful to runtimes for other languages.
+ *
+ * This is an experts-only interface! There are many ways to break the
+ * collector in subtle ways by using this functionality.
+ */
+#ifndef GC_MARK_H
+# define GC_MARK_H
+
+# ifndef GC_H
+# include "gc.h"
+# endif
+
+/* A client supplied mark procedure. Returns new mark stack pointer. */
+/* Primary effect should be to push new entries on the mark stack. */
+/* Mark stack pointer values are passed and returned explicitly. */
+/* Global variables describing the mark stack are not necessarily valid. */
+/* (This usually saves a few cycles by keeping things in registers.) */
+/* Assumed to scan about GC_PROC_BYTES on average. If it needs to do */
+/* much more work than that, it should do it in smaller pieces by */
+/* pushing itself back on the mark stack. */
+/* Note that it should always do some work (defined as marking some */
+/* objects) before pushing more than one entry on the mark stack. */
+/* This is required to ensure termination in the event of mark stack */
+/* overflows. */
+/* This procedure is always called with at least one empty entry on the */
+/* mark stack. */
+/* Currently we require that mark procedures look for pointers in a */
+/* subset of the places the conservative marker would. It must be safe */
+/* to invoke the normal mark procedure instead. */
+/* WARNING: Such a mark procedure may be invoked on an unused object */
+/* residing on a free list. Such objects are cleared, except for a */
+/* free list link field in the first word. Thus mark procedures may */
+/* not count on the presence of a type descriptor, and must handle this */
+/* case correctly somehow. */
+# define GC_PROC_BYTES 100
+struct GC_ms_entry;
+typedef struct GC_ms_entry * (*GC_mark_proc) GC_PROTO((
+ GC_word * addr, struct GC_ms_entry * mark_stack_ptr,
+ struct GC_ms_entry * mark_stack_limit, GC_word env));
+
+# define GC_LOG_MAX_MARK_PROCS 6
+# define GC_MAX_MARK_PROCS (1 << GC_LOG_MAX_MARK_PROCS)
+
+/* In a few cases it's necessary to assign statically known indices to */
+/* certain mark procs. Thus we reserve a few for well known clients. */
+/* (This is necessary if mark descriptors are compiler generated.) */
+#define GC_RESERVED_MARK_PROCS 8
+# define GC_GCJ_RESERVED_MARK_PROC_INDEX 0
+
+/* Object descriptors on mark stack or in objects. Low order two */
+/* bits are tags distinguishing among the following 4 possibilities */
+/* for the high order 30 bits. */
+#define GC_DS_TAG_BITS 2
+#define GC_DS_TAGS ((1 << GC_DS_TAG_BITS) - 1)
+#define GC_DS_LENGTH 0 /* The entire word is a length in bytes that */
+ /* must be a multiple of 4. */
+#define GC_DS_BITMAP 1 /* 30 (62) bits are a bitmap describing pointer */
+ /* fields. The msb is 1 iff the first word */
+ /* is a pointer. */
+ /* (This unconventional ordering sometimes */
+ /* makes the marker slightly faster.) */
+ /* Zeroes indicate definite nonpointers. Ones */
+ /* indicate possible pointers. */
+ /* Only usable if pointers are word aligned. */
+#define GC_DS_PROC 2
+ /* The objects referenced by this object can be */
+ /* pushed on the mark stack by invoking */
+ /* PROC(descr). ENV(descr) is passed as the */
+ /* last argument. */
+# define GC_MAKE_PROC(proc_index, env) \
+ (((((env) << GC_LOG_MAX_MARK_PROCS) \
+ | (proc_index)) << GC_DS_TAG_BITS) | GC_DS_PROC)
+#define GC_DS_PER_OBJECT 3 /* The real descriptor is at the */
+ /* byte displacement from the beginning of the */
+ /* object given by descr & ~DS_TAGS */
+ /* If the descriptor is negative, the real */
+ /* descriptor is at (*<object_start>) - */
+ /* (descr & ~DS_TAGS) - GC_INDIR_PER_OBJ_BIAS */
+ /* The latter alternative can be used if each */
+ /* object contains a type descriptor in the */
+ /* first word. */
+ /* Note that in multithreaded environments */
+ /* per object descriptors must be located in */
+ /* either the first two or last two words of */
+ /* the object, since only those are guaranteed */
+ /* to be cleared while the allocation lock is */
+ /* held. */
+#define GC_INDIR_PER_OBJ_BIAS 0x10
+
+extern GC_PTR GC_least_plausible_heap_addr;
+extern GC_PTR GC_greatest_plausible_heap_addr;
+ /* Bounds on the heap. Guaranteed valid */
+ /* Likely to include future heap expansion. */
+
+/* Handle nested references in a custom mark procedure. */
+/* Check if obj is a valid object. If so, ensure that it is marked. */
+/* If it was not previously marked, push its contents onto the mark */
+/* stack for future scanning. The object will then be scanned using */
+/* its mark descriptor. */
+/* Returns the new mark stack pointer. */
+/* Handles mark stack overflows correctly. */
+/* Since this marks first, it makes progress even if there are mark */
+/* stack overflows. */
+/* Src is the address of the pointer to obj, which is used only */
+/* for back pointer-based heap debugging. */
+/* It is strongly recommended that most objects be handled without mark */
+/* procedures, e.g. with bitmap descriptors, and that mark procedures */
+/* be reserved for exceptional cases. That will ensure that the */
+/* performance of this call is not critical. */
+/* (Otherwise we would need to inline GC_mark_and_push completely, */
+/* which would tie the client code to a fixed collector version.) */
+/* Note that mark procedures should explicitly call FIXUP_POINTER() */
+/* if required. */
+struct GC_ms_entry *GC_mark_and_push
+ GC_PROTO((GC_PTR obj,
+ struct GC_ms_entry * mark_stack_ptr,
+ struct GC_ms_entry * mark_stack_limit, GC_PTR *src));
+
+#define GC_MARK_AND_PUSH(obj, msp, lim, src) \
+ (((GC_word)obj >= (GC_word)GC_least_plausible_heap_addr && \
+ (GC_word)obj <= (GC_word)GC_greatest_plausible_heap_addr)? \
+ GC_mark_and_push(obj, msp, lim, src) : \
+ msp)
+
+extern size_t GC_debug_header_size;
+ /* The size of the header added to objects allocated through */
+ /* the GC_debug routines. */
+ /* Defined as a variable so that client mark procedures don't */
+ /* need to be recompiled for collector version changes. */
+#define GC_USR_PTR_FROM_BASE(p) ((GC_PTR)((char *)(p) + GC_debug_header_size))
+
+/* And some routines to support creation of new "kinds", e.g. with */
+/* custom mark procedures, by language runtimes. */
+/* The _inner versions assume the caller holds the allocation lock. */
+
+/* Return a new free list array. */
+void ** GC_new_free_list GC_PROTO((void));
+void ** GC_new_free_list_inner GC_PROTO((void));
+
+/* Return a new kind, as specified. */
+int GC_new_kind GC_PROTO((void **free_list, GC_word mark_descriptor_template,
+ int add_size_to_descriptor, int clear_new_objects));
+ /* The last two parameters must be zero or one. */
+int GC_new_kind_inner GC_PROTO((void **free_list,
+ GC_word mark_descriptor_template,
+ int add_size_to_descriptor,
+ int clear_new_objects));
+
+/* Return a new mark procedure identifier, suitable for use as */
+/* the first argument in GC_MAKE_PROC. */
+int GC_new_proc GC_PROTO((GC_mark_proc));
+int GC_new_proc_inner GC_PROTO((GC_mark_proc));
+
+/* Allocate an object of a given kind. Note that in multithreaded */
+/* contexts, this is usually unsafe for kinds that have the descriptor */
+/* in the object itself, since there is otherwise a window in which */
+/* the descriptor is not correct. Even in the single-threaded case, */
+/* we need to be sure that cleared objects on a free list don't */
+/* cause a GC crash if they are accidentally traced. */
+/* ptr_t */char * GC_generic_malloc GC_PROTO((GC_word lb, int k));
+
+/* FIXME - Should return void *, but that requires other changes. */
+
+typedef void (*GC_describe_type_fn) GC_PROTO((void *p, char *out_buf));
+ /* A procedure which */
+ /* produces a human-readable */
+ /* description of the "type" of object */
+ /* p into the buffer out_buf of length */
+ /* GC_TYPE_DESCR_LEN. This is used by */
+ /* the debug support when printing */
+ /* objects. */
+ /* These functions should be as robust */
+ /* as possible, though we do avoid */
+ /* invoking them on objects on the */
+ /* global free list. */
+# define GC_TYPE_DESCR_LEN 40
+
+void GC_register_describe_type_fn GC_PROTO((int kind, GC_describe_type_fn knd));
+ /* Register a describe_type function */
+ /* to be used when printing objects */
+ /* of a particular kind. */
+
+#endif /* GC_MARK_H */
+
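As a sketch of the kind-creation interface, a runtime might register a new
object kind whose instances are fully scanned for pointers (a GC_DS_LENGTH
descriptor with the object size added in, matching the built-in NORMAL
kind) and then allocate from it; error handling is omitted, and the names
are illustrative only.

#include "gc.h"
#include "gc_mark.h"

static int my_kind;

static void my_kind_init(void) {
    void **fl = GC_new_free_list();
    my_kind = GC_new_kind(fl, GC_DS_LENGTH,
                          1 /* add_size_to_descriptor */,
                          1 /* clear_new_objects */);
}

static void *my_kind_malloc(size_t lb) {
    return GC_generic_malloc((GC_word)lb, my_kind);
}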
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_pthread_redirects.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_pthread_redirects.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_pthread_redirects.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_pthread_redirects.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,82 @@
+/* Our pthread support normally needs to intercept a number of thread */
+/* calls. We arrange to do that here, if appropriate. */
+
+#ifndef GC_PTHREAD_REDIRECTS_H
+
+#define GC_PTHREAD_REDIRECTS_H
+
+#if defined(GC_SOLARIS_THREADS)
+/* We need to intercept calls to many of the threads primitives, so */
+/* that we can locate thread stacks and stop the world. */
+/* Note also that the collector cannot see thread specific data. */
+/* Thread specific data should generally consist of pointers to */
+/* uncollectable objects (allocated with GC_malloc_uncollectable, */
+/* not the system malloc), which are deallocated using the destructor */
+/* facility in thr_keycreate. Alternatively, keep a redundant pointer */
+/* to thread specific data on the thread stack. */
+# include <thread.h>
+ int GC_thr_create(void *stack_base, size_t stack_size,
+ void *(*start_routine)(void *), void *arg, long flags,
+ thread_t *new_thread);
+ int GC_thr_join(thread_t wait_for, thread_t *departed, void **status);
+ int GC_thr_suspend(thread_t target_thread);
+ int GC_thr_continue(thread_t target_thread);
+ void * GC_dlopen(const char *path, int mode);
+# define thr_create GC_thr_create
+# define thr_join GC_thr_join
+# define thr_suspend GC_thr_suspend
+# define thr_continue GC_thr_continue
+#endif /* GC_SOLARIS_THREADS */
+
+#if defined(GC_SOLARIS_PTHREADS)
+# include <pthread.h>
+# include <signal.h>
+ extern int GC_pthread_create(pthread_t *new_thread,
+ const pthread_attr_t *attr,
+ void * (*thread_execp)(void *), void *arg);
+ extern int GC_pthread_join(pthread_t wait_for, void **status);
+# define pthread_join GC_pthread_join
+# define pthread_create GC_pthread_create
+#endif
+
+#if defined(GC_SOLARIS_PTHREADS) || defined(GC_SOLARIS_THREADS)
+# define dlopen GC_dlopen
+#endif /* SOLARIS_THREADS || SOLARIS_PTHREADS */
+
+
+#if !defined(GC_USE_LD_WRAP) && defined(GC_PTHREADS) && !defined(GC_SOLARIS_PTHREADS)
+/* We treat these similarly. */
+# include <pthread.h>
+# include <signal.h>
+
+ int GC_pthread_create(pthread_t *new_thread,
+ const pthread_attr_t *attr,
+ void *(*start_routine)(void *), void *arg);
+#ifndef GC_DARWIN_THREADS
+ int GC_pthread_sigmask(int how, const sigset_t *set, sigset_t *oset);
+#endif
+ int GC_pthread_join(pthread_t thread, void **retval);
+ int GC_pthread_detach(pthread_t thread);
+
+#if defined(GC_OSF1_THREADS) \
+ && defined(_PTHREAD_USE_MANGLED_NAMES_) && !defined(_PTHREAD_USE_PTDNAM_)
+/* Unless the compiler supports #pragma extern_prefix, the Tru64 UNIX
+ <pthread.h> redefines some POSIX thread functions to use mangled names.
+ If so, undef them before redefining. */
+# undef pthread_create
+# undef pthread_join
+# undef pthread_detach
+#endif
+
+# define pthread_create GC_pthread_create
+# define pthread_join GC_pthread_join
+# define pthread_detach GC_pthread_detach
+
+#ifndef GC_DARWIN_THREADS
+# define pthread_sigmask GC_pthread_sigmask
+# define dlopen GC_dlopen
+#endif
+
+#endif /* GC_xxxxx_THREADS */
+
+#endif /* GC_PTHREAD_REDIRECTS_H */
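
The effect of these redirections, sketched below; the appropriate
GC_xxx_THREADS macro (or the generic GC_THREADS, on collector versions that
honor it) must be defined before gc.h is included so that this file is
pulled in:

#define GC_THREADS
#include "gc.h"
#include <pthread.h>

static void *worker(void *arg) {
    return GC_MALLOC(128);   /* runs in a collector-registered thread */
}

static int spawn_worker(pthread_t *t) {
    /* Because of the redirection above, this actually calls        */
    /* GC_pthread_create, so the new thread's stack becomes a root. */
    return pthread_create(t, 0, worker, 0);
}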
Added: llvm-gcc-4.2/trunk/boehm-gc/include/gc_typed.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/gc_typed.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/gc_typed.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/gc_typed.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,113 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright 1996 Silicon Graphics. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/*
+ * Some simple primitives for allocation with explicit type information.
+ * Facilities for dynamic type inference may be added later.
+ * Should be used only for extremely performance critical applications,
+ * or if conservative collector leakage is otherwise a problem (unlikely).
+ * Note that this is implemented completely separately from the rest
+ * of the collector, and is not linked in unless referenced.
+ * This does not currently support GC_DEBUG in any interesting way.
+ */
+/* Boehm, May 19, 1994 2:13 pm PDT */
+
+#ifndef _GC_TYPED_H
+# define _GC_TYPED_H
+# ifndef _GC_H
+# include "gc.h"
+# endif
+
+#ifdef __cplusplus
+ extern "C" {
+#endif
+typedef GC_word * GC_bitmap;
+ /* The least significant bit of the first word is one if */
+ /* the first word in the object may be a pointer. */
+
+# define GC_WORDSZ (8*sizeof(GC_word))
+# define GC_get_bit(bm, index) \
+ (((bm)[index/GC_WORDSZ] >> (index%GC_WORDSZ)) & 1)
+# define GC_set_bit(bm, index) \
+ (bm)[index/GC_WORDSZ] |= ((GC_word)1 << (index%GC_WORDSZ))
+# define GC_WORD_OFFSET(t, f) (offsetof(t,f)/sizeof(GC_word))
+# define GC_WORD_LEN(t) (sizeof(t)/ sizeof(GC_word))
+# define GC_BITMAP_SIZE(t) ((GC_WORD_LEN(t) + GC_WORDSZ-1)/GC_WORDSZ)
+
+typedef GC_word GC_descr;
+
+GC_API GC_descr GC_make_descriptor GC_PROTO((GC_bitmap bm, size_t len));
+ /* Return a type descriptor for the object whose layout */
+ /* is described by the argument. */
+ /* The least significant bit of the first word is one */
+ /* if the first word in the object may be a pointer. */
+ /* The second argument specifies the number of */
+ /* meaningful bits in the bitmap. The actual object */
+ /* may be larger (but not smaller). Any additional */
+ /* words in the object are assumed not to contain */
+ /* pointers. */
+ /* Returns a conservative approximation in the */
+ /* (unlikely) case of insufficient memory to build */
+ /* the descriptor. Calls to GC_make_descriptor */
+ /* may consume some amount of a finite resource. This */
+ /* is intended to be called once per type, not once */
+ /* per allocation. */
+
+/* It is possible to generate a descriptor for a C type T with */
+/* word aligned pointer fields f1, f2, ... as follows: */
+/* */
+/* GC_descr T_descr; */
+/* GC_word T_bitmap[GC_BITMAP_SIZE(T)] = {0}; */
+/* GC_set_bit(T_bitmap, GC_WORD_OFFSET(T,f1)); */
+/* GC_set_bit(T_bitmap, GC_WORD_OFFSET(T,f2)); */
+/* ... */
+/* T_descr = GC_make_descriptor(T_bitmap, GC_WORD_LEN(T)); */
+
+GC_API GC_PTR GC_malloc_explicitly_typed
+ GC_PROTO((size_t size_in_bytes, GC_descr d));
+ /* Allocate an object whose layout is described by d. */
+ /* The resulting object MAY NOT BE PASSED TO REALLOC. */
+ /* The returned object is cleared. */
+
+GC_API GC_PTR GC_malloc_explicitly_typed_ignore_off_page
+ GC_PROTO((size_t size_in_bytes, GC_descr d));
+
+GC_API GC_PTR GC_calloc_explicitly_typed
+ GC_PROTO((size_t nelements,
+ size_t element_size_in_bytes,
+ GC_descr d));
+ /* Allocate an array of nelements elements, each of the */
+ /* given size, and with the given descriptor. */
+ /* The element size must be a multiple of the byte */
+ /* alignment required for pointers. E.g. on a 32-bit */
+ /* machine with 16-bit aligned pointers, */
+ /* element_size_in_bytes must be a multiple of 2. */
+ /* Returned object is cleared. */
+
+#ifdef GC_DEBUG
+# define GC_MALLOC_EXPLICITLY_TYPED(bytes, d) GC_MALLOC(bytes)
+# define GC_CALLOC_EXPLICITLY_TYPED(n, bytes, d) GC_MALLOC((n)*(bytes))
+#else
+# define GC_MALLOC_EXPLICITLY_TYPED(bytes, d) \
+ GC_malloc_explicitly_typed(bytes, d)
+# define GC_CALLOC_EXPLICITLY_TYPED(n, bytes, d) \
+ GC_calloc_explicitly_typed(n, bytes, d)
+#endif /* !GC_DEBUG */
+
+#ifdef __cplusplus
+ } /* matches extern "C" */
+#endif
+
+#endif /* _GC_TYPED_H */
+
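A compilable version of the recipe given in the comments above, for a
hypothetical node type with two word-aligned pointer fields:

#include <stddef.h>
#include "gc_typed.h"

typedef struct node {
    int value;
    struct node *left;
    struct node *right;
} node;

static GC_descr node_descr;

static void node_descr_init(void) {
    GC_word bm[GC_BITMAP_SIZE(node)] = {0};
    GC_set_bit(bm, GC_WORD_OFFSET(node, left));
    GC_set_bit(bm, GC_WORD_OFFSET(node, right));
    node_descr = GC_make_descriptor(bm, GC_WORD_LEN(node));
}

static node *new_node(void) {
    /* Only the words set in the bitmap are scanned for pointers. */
    return (node *)GC_malloc_explicitly_typed(sizeof(node), node_descr);
}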
Added: llvm-gcc-4.2/trunk/boehm-gc/include/javaxfc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/javaxfc.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/javaxfc.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/javaxfc.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,21 @@
+# ifndef GC_H
+# include "gc.h"
+# endif
+
+/*
+ * Invoke all remaining finalizers that haven't yet been run.
+ * This is needed for strict compliance with the Java standard,
+ * which may require the runtime to guarantee that all finalizers are run.
+ * This is problematic for several reasons:
+ * 1) It means that finalizers, and all methods called by them,
+ * must be prepared to deal with objects that have been finalized in
+ * spite of the fact that they are still referenced by statically
+ * allocated pointer variables.
+ * 2) It may mean that we get stuck in an infinite loop running
+ * finalizers which create new finalizable objects, though that's
+ * probably unlikely.
+ * Thus this is not recommended for general use.
+ */
+void GC_finalize_all();
+
+
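The intended call site is an orderly runtime shutdown, roughly as in this
sketch:

#include "gc.h"
#include "javaxfc.h"

static void runtime_shutdown(void) {
    GC_gcollect();       /* enqueue newly unreachable finalizable objects */
    GC_finalize_all();   /* then run every finalizer that remains */
}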
Added: llvm-gcc-4.2/trunk/boehm-gc/include/leak_detector.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/leak_detector.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/leak_detector.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/leak_detector.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,7 @@
+#define GC_DEBUG
+#include "gc.h"
+#define malloc(n) GC_MALLOC(n)
+#define calloc(m,n) GC_MALLOC((m)*(n))
+#define free(p) GC_FREE(p)
+#define realloc(p,n) GC_REALLOC((p),(n))
+#define CHECK_LEAKS() GC_gcollect()
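
Typical use, assuming the collector is operating in leak-detection mode
(built with -DFIND_LEAK, or with GC_find_leak set): include this header
before any malloc/free use in each file under test, then force a
collection.

#include "leak_detector.h"

int main(void) {
    char *p = (char *)malloc(100);   /* actually GC_MALLOC(100) */
    p = 0;                           /* drop the only reference */
    CHECK_LEAKS();                   /* unreachable, unfreed blocks */
                                     /* are reported here */
    return 0;
}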
Added: llvm-gcc-4.2/trunk/boehm-gc/include/new_gc_alloc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/new_gc_alloc.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/new_gc_alloc.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/new_gc_alloc.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,480 @@
+/*
+ * Copyright (c) 1996-1998 by Silicon Graphics. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+//
+// This is a revision of gc_alloc.h for SGI STL versions > 3.0
+// Unlike earlier versions, it supplements the standard "alloc.h"
+// instead of replacing it.
+//
+// This is sloppy about variable names used in header files.
+// It also doesn't yet understand the new header file names or
+// namespaces.
+//
+// This assumes the collector has been compiled with -DATOMIC_UNCOLLECTABLE.
+// The user should also consider -DREDIRECT_MALLOC=GC_malloc_uncollectable,
+// to ensure that objects allocated through malloc are traced.
+//
+// Some of this could be faster in the explicit deallocation case.
+// In particular, we spend too much time clearing objects on the
+// free lists. That could be avoided.
+//
+// This uses template classes with static members, and hence does not work
+// with g++ 2.7.2 and earlier.
+//
+// Unlike its predecessor, this one simply defines
+// gc_alloc
+// single_client_gc_alloc
+// traceable_alloc
+// single_client_traceable_alloc
+//
+// It does not redefine alloc. Nor does it change the default allocator,
+// though the user may wish to do so. (The argument against changing
+// the default allocator is that it may introduce subtle link compatibility
+// problems. The argument for changing it is that the usual default
+// allocator is usually a very bad choice for a garbage collected environment.)
+//
+// This code assumes that the collector itself has been compiled with a
+// compiler that defines __STDC__ .
+//
+
+#ifndef GC_ALLOC_H
+
+#include "gc.h"
+
+#if (__GNUC__ < 3)
+# include <stack> // A more portable way to get stl_alloc.h .
+#else
+# include <bits/stl_alloc.h>
+# ifndef __STL_BEGIN_NAMESPACE
+# define __STL_BEGIN_NAMESPACE namespace std {
+# define __STL_END_NAMESPACE };
+# endif
+#ifndef __STL_USE_STD_ALLOCATORS
+#define __STL_USE_STD_ALLOCATORS
+#endif
+#endif
+
+/* A hack to deal with gcc 3.1. If you are using gcc3.1 and later, */
+/* you should probably really use gc_allocator.h instead. */
+#if defined (__GNUC__) && \
+ (__GNUC__ > 3 || (__GNUC__ == 3 && (__GNUC_MINOR__ >= 1)))
+# define simple_alloc __simple_alloc
+#endif
+
+
+
+#define GC_ALLOC_H
+
+#include <stddef.h>
+#include <string.h>
+
+// The following need to match collector data structures.
+// We can't include gc_priv.h, since that pulls in way too much stuff.
+// This should eventually be factored out into another include file.
+
+extern "C" {
+ extern void ** const GC_objfreelist_ptr;
+ extern void ** const GC_aobjfreelist_ptr;
+ extern void ** const GC_uobjfreelist_ptr;
+ extern void ** const GC_auobjfreelist_ptr;
+
+ extern void GC_incr_words_allocd(size_t words);
+ extern void GC_incr_mem_freed(size_t words);
+
+ extern char * GC_generic_malloc_words_small(size_t word, int kind);
+}
+
+// Object kinds; must match PTRFREE, NORMAL, UNCOLLECTABLE, and
+// AUNCOLLECTABLE in gc_priv.h.
+
+enum { GC_PTRFREE = 0, GC_NORMAL = 1, GC_UNCOLLECTABLE = 2,
+ GC_AUNCOLLECTABLE = 3 };
+
+enum { GC_max_fast_bytes = 255 };
+
+enum { GC_bytes_per_word = sizeof(char *) };
+
+enum { GC_byte_alignment = 8 };
+
+enum { GC_word_alignment = GC_byte_alignment/GC_bytes_per_word };
+
+inline void * &GC_obj_link(void * p)
+{ return *reinterpret_cast<void **>(p); }
+
+// Compute a number of words whose total size is >= n+1 bytes.
+// The +1 allows for pointers one past the end.
+inline size_t GC_round_up(size_t n)
+{
+ return ((n + GC_byte_alignment)/GC_byte_alignment)*GC_word_alignment;
+}
+
+// The same but don't allow for extra byte.
+inline size_t GC_round_up_uncollectable(size_t n)
+{
+ return ((n + GC_byte_alignment - 1)/GC_byte_alignment)*GC_word_alignment;
+}
+
+template <int dummy>
+class GC_aux_template {
+public:
+ // File local count of allocated words. Occasionally this is
+ // added into the global count. A separate count is necessary since the
+ // real one must be updated with a procedure call.
+ static size_t GC_words_recently_allocd;
+
+ // Same for uncollectable memory. Not yet reflected in either
+ // GC_words_recently_allocd or GC_non_gc_bytes.
+ static size_t GC_uncollectable_words_recently_allocd;
+
+ // Similar counter for explicitly deallocated memory.
+ static size_t GC_mem_recently_freed;
+
+ // Again for uncollectable memory.
+ static size_t GC_uncollectable_mem_recently_freed;
+
+ static void * GC_out_of_line_malloc(size_t nwords, int kind);
+};
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_words_recently_allocd = 0;
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_uncollectable_words_recently_allocd = 0;
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_mem_recently_freed = 0;
+
+template <int dummy>
+size_t GC_aux_template<dummy>::GC_uncollectable_mem_recently_freed = 0;
+
+template <int dummy>
+void * GC_aux_template<dummy>::GC_out_of_line_malloc(size_t nwords, int kind)
+{
+ GC_words_recently_allocd += GC_uncollectable_words_recently_allocd;
+ GC_non_gc_bytes +=
+ GC_bytes_per_word * GC_uncollectable_words_recently_allocd;
+ GC_uncollectable_words_recently_allocd = 0;
+
+ GC_mem_recently_freed += GC_uncollectable_mem_recently_freed;
+ GC_non_gc_bytes -=
+ GC_bytes_per_word * GC_uncollectable_mem_recently_freed;
+ GC_uncollectable_mem_recently_freed = 0;
+
+ GC_incr_words_allocd(GC_words_recently_allocd);
+ GC_words_recently_allocd = 0;
+
+ GC_incr_mem_freed(GC_mem_recently_freed);
+ GC_mem_recently_freed = 0;
+
+ return GC_generic_malloc_words_small(nwords, kind);
+}
+
+typedef GC_aux_template<0> GC_aux;
+
+// A fast, single-threaded, garbage-collected allocator
+// We assume the first word will be immediately overwritten.
+// In this version, deallocation is not a noop, and explicit
+// deallocation is likely to help performance.
+template <int dummy>
+class single_client_gc_alloc_template {
+ public:
+ static void * allocate(size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc(n);
+ flh = GC_objfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_NORMAL);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_words_recently_allocd += nwords;
+ return op;
+ }
+ static void * ptr_free_allocate(size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc_atomic(n);
+ flh = GC_aobjfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_PTRFREE);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_words_recently_allocd += nwords;
+ return op;
+ }
+ static void deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_objfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ memset(reinterpret_cast<char *>(p) + GC_bytes_per_word, 0,
+ GC_bytes_per_word * (nwords - 1));
+ *flh = p;
+ GC_aux::GC_mem_recently_freed += nwords;
+ }
+ }
+ static void ptr_free_deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_aobjfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ *flh = p;
+ GC_aux::GC_mem_recently_freed += nwords;
+ }
+ }
+};
+
+typedef single_client_gc_alloc_template<0> single_client_gc_alloc;
+
+// Once more, for uncollectable objects.
+template <int dummy>
+class single_client_traceable_alloc_template {
+ public:
+ static void * allocate(size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc_uncollectable(n);
+ flh = GC_uobjfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_UNCOLLECTABLE);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_uncollectable_words_recently_allocd += nwords;
+ return op;
+ }
+ static void * ptr_free_allocate(size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+ void * op;
+
+ if (n > GC_max_fast_bytes) return GC_malloc_atomic_uncollectable(n);
+ flh = GC_auobjfreelist_ptr + nwords;
+ if (0 == (op = *flh)) {
+ return GC_aux::GC_out_of_line_malloc(nwords, GC_AUNCOLLECTABLE);
+ }
+ *flh = GC_obj_link(op);
+ GC_aux::GC_uncollectable_words_recently_allocd += nwords;
+ return op;
+ }
+ static void deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_uobjfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ *flh = p;
+ GC_aux::GC_uncollectable_mem_recently_freed += nwords;
+ }
+ }
+ static void ptr_free_deallocate(void *p, size_t n)
+ {
+ size_t nwords = GC_round_up_uncollectable(n);
+ void ** flh;
+
+ if (n > GC_max_fast_bytes) {
+ GC_free(p);
+ } else {
+ flh = GC_auobjfreelist_ptr + nwords;
+ GC_obj_link(p) = *flh;
+ *flh = p;
+ GC_aux::GC_uncollectable_mem_recently_freed += nwords;
+ }
+ }
+};
+
+typedef single_client_traceable_alloc_template<0> single_client_traceable_alloc;
+
+template < int dummy >
+class gc_alloc_template {
+ public:
+ static void * allocate(size_t n) { return GC_malloc(n); }
+ static void * ptr_free_allocate(size_t n)
+ { return GC_malloc_atomic(n); }
+ static void deallocate(void *, size_t) { }
+ static void ptr_free_deallocate(void *, size_t) { }
+};
+
+typedef gc_alloc_template < 0 > gc_alloc;
+
+template < int dummy >
+class traceable_alloc_template {
+ public:
+ static void * allocate(size_t n) { return GC_malloc_uncollectable(n); }
+ static void * ptr_free_allocate(size_t n)
+ { return GC_malloc_atomic_uncollectable(n); }
+ static void deallocate(void *p, size_t) { GC_free(p); }
+ static void ptr_free_deallocate(void *p, size_t) { GC_free(p); }
+};
+
+typedef traceable_alloc_template < 0 > traceable_alloc;
+
+// We want to specialize simple_alloc so that it does the right thing
+// for all pointerfree types. At the moment there is no portable way to
+// even approximate that. The following approximation should work for
+// SGI compilers, and recent versions of g++.
+
+# define __GC_SPECIALIZE(T,alloc) \
+class simple_alloc<T, alloc> { \
+public: \
+ static T *allocate(size_t n) \
+ { return 0 == n? 0 : \
+ reinterpret_cast<T*>(alloc::ptr_free_allocate(n * sizeof (T))); } \
+ static T *allocate(void) \
+ { return reinterpret_cast<T*>(alloc::ptr_free_allocate(sizeof (T))); } \
+ static void deallocate(T *p, size_t n) \
+ { if (0 != n) alloc::ptr_free_deallocate(p, n * sizeof (T)); } \
+ static void deallocate(T *p) \
+ { alloc::ptr_free_deallocate(p, sizeof (T)); } \
+};
+
+__STL_BEGIN_NAMESPACE
+
+__GC_SPECIALIZE(char, gc_alloc)
+__GC_SPECIALIZE(int, gc_alloc)
+__GC_SPECIALIZE(unsigned, gc_alloc)
+__GC_SPECIALIZE(float, gc_alloc)
+__GC_SPECIALIZE(double, gc_alloc)
+
+__GC_SPECIALIZE(char, traceable_alloc)
+__GC_SPECIALIZE(int, traceable_alloc)
+__GC_SPECIALIZE(unsigned, traceable_alloc)
+__GC_SPECIALIZE(float, traceable_alloc)
+__GC_SPECIALIZE(double, traceable_alloc)
+
+__GC_SPECIALIZE(char, single_client_gc_alloc)
+__GC_SPECIALIZE(int, single_client_gc_alloc)
+__GC_SPECIALIZE(unsigned, single_client_gc_alloc)
+__GC_SPECIALIZE(float, single_client_gc_alloc)
+__GC_SPECIALIZE(double, single_client_gc_alloc)
+
+__GC_SPECIALIZE(char, single_client_traceable_alloc)
+__GC_SPECIALIZE(int, single_client_traceable_alloc)
+__GC_SPECIALIZE(unsigned, single_client_traceable_alloc)
+__GC_SPECIALIZE(float, single_client_traceable_alloc)
+__GC_SPECIALIZE(double, single_client_traceable_alloc)
+
+__STL_END_NAMESPACE
+
+#ifdef __STL_USE_STD_ALLOCATORS
+
+__STL_BEGIN_NAMESPACE
+
+template <class _Tp>
+struct _Alloc_traits<_Tp, gc_alloc >
+{
+ static const bool _S_instanceless = true;
+ typedef simple_alloc<_Tp, gc_alloc > _Alloc_type;
+ typedef __allocator<_Tp, gc_alloc > allocator_type;
+};
+
+inline bool operator==(const gc_alloc&,
+ const gc_alloc&)
+{
+ return true;
+}
+
+inline bool operator!=(const gc_alloc&,
+ const gc_alloc&)
+{
+ return false;
+}
+
+template <class _Tp>
+struct _Alloc_traits<_Tp, single_client_gc_alloc >
+{
+ static const bool _S_instanceless = true;
+ typedef simple_alloc<_Tp, single_client_gc_alloc > _Alloc_type;
+ typedef __allocator<_Tp, single_client_gc_alloc > allocator_type;
+};
+
+inline bool operator==(const single_client_gc_alloc&,
+ const single_client_gc_alloc&)
+{
+ return true;
+}
+
+inline bool operator!=(const single_client_gc_alloc&,
+ const single_client_gc_alloc&)
+{
+ return false;
+}
+
+template <class _Tp>
+struct _Alloc_traits<_Tp, traceable_alloc >
+{
+ static const bool _S_instanceless = true;
+ typedef simple_alloc<_Tp, traceable_alloc > _Alloc_type;
+ typedef __allocator<_Tp, traceable_alloc > allocator_type;
+};
+
+inline bool operator==(const traceable_alloc&,
+ const traceable_alloc&)
+{
+ return true;
+}
+
+inline bool operator!=(const traceable_alloc&,
+ const traceable_alloc&)
+{
+ return false;
+}
+
+template <class _Tp>
+struct _Alloc_traits<_Tp, single_client_traceable_alloc >
+{
+ static const bool _S_instanceless = true;
+ typedef simple_alloc<_Tp, single_client_traceable_alloc > _Alloc_type;
+ typedef __allocator<_Tp, single_client_traceable_alloc > allocator_type;
+};
+
+inline bool operator==(const single_client_traceable_alloc&,
+ const single_client_traceable_alloc&)
+{
+ return true;
+}
+
+inline bool operator!=(const single_client_traceable_alloc&,
+ const single_client_traceable_alloc&)
+{
+ return false;
+}
+
+__STL_END_NAMESPACE
+
+#endif /* __STL_USE_STD_ALLOCATORS */
+
+#endif /* GC_ALLOC_H */
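
A sketch of direct use of the allocator classes defined above; container
integration goes through the simple_alloc and _Alloc_traits
specializations and depends on the STL implementation in use.

#include "new_gc_alloc.h"

void allocator_demo() {
    // Collected storage that may contain pointers; deallocation
    // is optional (gc_alloc::deallocate is a no-op).
    void *p = gc_alloc::allocate(64);

    // Pointer-free collected storage, never scanned by the marker.
    void *q = gc_alloc::ptr_free_allocate(64);

    // Single-threaded fast path; explicit deallocation recycles
    // the object onto the appropriate free list.
    void *r = single_client_gc_alloc::allocate(64);
    single_client_gc_alloc::deallocate(r, 64);

    (void)p; (void)q;
}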
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/cord_pos.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/cord_pos.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/cord_pos.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/cord_pos.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) 1993-1994 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, May 19, 1994 2:23 pm PDT */
+# ifndef CORD_POSITION_H
+# define CORD_POSITION_H
+
+/* The representation of CORD_position. This is private to the */
+/* implementation, but the size is known to clients. Also */
+/* the implementation of some exported macros relies on it. */
+/* Don't use anything defined here and not in cord.h. */
+
+# define MAX_DEPTH 48
+ /* The maximum depth of a balanced cord + 1. */
+ /* We don't let cords get deeper than MAX_DEPTH. */
+
+struct CORD_pe {
+ CORD pe_cord;
+ size_t pe_start_pos;
+};
+
+/* A structure describing an entry on the path from the root */
+/* to current position. */
+typedef struct CORD_Pos {
+ size_t cur_pos;
+ int path_len;
+# define CORD_POS_INVALID (0x55555555)
+ /* path_len == CORD_POS_INVALID <==> position invalid */
+ const char *cur_leaf; /* Current leaf, if it is a string. */
+ /* If the current leaf is a function, */
+ /* then this may point to function_buf */
+ /* containing the next few characters. */
+ /* Always points to a valid string */
+ /* containing the current character */
+ /* unless cur_end is 0. */
+ size_t cur_start; /* Start position of cur_leaf */
+ size_t cur_end; /* Ending position of cur_leaf */
+ /* 0 if cur_leaf is invalid. */
+ struct CORD_pe path[MAX_DEPTH + 1];
+ /* path[path_len] is the leaf corresponding to cur_pos */
+ /* path[0].pe_cord is the cord we point to. */
+# define FUNCTION_BUF_SZ 8
+ char function_buf[FUNCTION_BUF_SZ]; /* Space for next few chars */
+ /* from function node. */
+} CORD_pos[1];
+
+/* Extract the cord from a position: */
+CORD CORD_pos_to_cord(CORD_pos p);
+
+/* Extract the current index from a position: */
+size_t CORD_pos_to_index(CORD_pos p);
+
+/* Fetch the character located at the given position: */
+char CORD_pos_fetch(CORD_pos p);
+
+/* Initialize the position to refer to the given cord and index. */
+/* Note that this is the most expensive function on positions: */
+void CORD_set_pos(CORD_pos p, CORD x, size_t i);
+
+/* Advance the position to the next character. */
+/* P must be initialized and valid. */
+/* Invalidates p if past end: */
+void CORD_next(CORD_pos p);
+
+/* Move the position to the preceding character. */
+/* P must be initialized and valid. */
+/* Invalidates p if past beginning: */
+void CORD_prev(CORD_pos p);
+
+/* Is the position valid, i.e. inside the cord? */
+int CORD_pos_valid(CORD_pos p);
+
+char CORD__pos_fetch(CORD_pos);
+void CORD__next(CORD_pos);
+void CORD__prev(CORD_pos);
+
+#define CORD_pos_fetch(p) \
+ (((p)[0].cur_end != 0)? \
+ (p)[0].cur_leaf[(p)[0].cur_pos - (p)[0].cur_start] \
+ : CORD__pos_fetch(p))
+
+#define CORD_next(p) \
+ (((p)[0].cur_pos + 1 < (p)[0].cur_end)? \
+ (p)[0].cur_pos++ \
+ : (CORD__next(p), 0))
+
+#define CORD_prev(p) \
+ (((p)[0].cur_end != 0 && (p)[0].cur_pos > (p)[0].cur_start)? \
+ (p)[0].cur_pos-- \
+ : (CORD__prev(p), 0))
+
+#define CORD_pos_to_index(p) ((p)[0].cur_pos)
+
+#define CORD_pos_to_cord(p) ((p)[0].path[0].pe_cord)
+
+#define CORD_pos_valid(p) ((p)[0].path_len != CORD_POS_INVALID)
+
+/* Some grubby stuff for performance-critical friends: */
+#define CORD_pos_chars_left(p) ((long)((p)[0].cur_end) - (long)((p)[0].cur_pos))
+ /* Number of characters in cache. <= 0 ==> none */
+
+#define CORD_pos_advance(p,n) ((p)[0].cur_pos += (n) - 1, CORD_next(p))
+ /* Advance position by n characters */
+ /* 0 < n < CORD_pos_chars_left(p) */
+
+#define CORD_pos_cur_char_addr(p) \
+ (p)[0].cur_leaf + ((p)[0].cur_pos - (p)[0].cur_start)
+ /* address of current character in cache. */
+
+#endif
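
For illustration, the usual iteration idiom built on these primitives
(CORD and the position functions come from the public cord.h, which
includes this file):

#include <stddef.h>
#include "cord.h"

/* Count the occurrences of c in x. */
size_t count_occurrences(CORD x, char c) {
    CORD_pos pos;
    size_t n = 0;
    for (CORD_set_pos(pos, x, 0); CORD_pos_valid(pos); CORD_next(pos)) {
        if (CORD_pos_fetch(pos) == c) n++;
    }
    return n;
}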
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_semaphore.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_semaphore.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_semaphore.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_semaphore.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,68 @@
+#ifndef GC_DARWIN_SEMAPHORE_H
+#define GC_DARWIN_SEMAPHORE_H
+
+#if !defined(GC_DARWIN_THREADS)
+#error darwin_semaphore.h included with GC_DARWIN_THREADS not defined
+#endif
+
+/*
+ This is a very simple semaphore implementation for darwin. It
+ is implemented in terms of pthreads calls so it isn't async signal
+ safe. This isn't a problem because signals aren't used to
+ suspend threads on darwin.
+*/
+
+typedef struct {
+ pthread_mutex_t mutex;
+ pthread_cond_t cond;
+ int value;
+} sem_t;
+
+static int sem_init(sem_t *sem, int pshared, int value) {
+ int ret;
+ if(pshared)
+ GC_abort("sem_init with pshared set");
+ sem->value = value;
+
+ ret = pthread_mutex_init(&sem->mutex,NULL);
+ if(ret != 0) return -1; /* pthread calls return an error number, not -1 */
+ ret = pthread_cond_init(&sem->cond,NULL);
+ if(ret != 0) return -1;
+ return 0;
+}
+
+static int sem_post(sem_t *sem) {
+ if(pthread_mutex_lock(&sem->mutex) != 0)
+ return -1;
+ sem->value++;
+ if(pthread_cond_signal(&sem->cond) != 0) {
+ pthread_mutex_unlock(&sem->mutex);
+ return -1;
+ }
+ if(pthread_mutex_unlock(&sem->mutex) != 0)
+ return -1;
+ return 0;
+}
+
+static int sem_wait(sem_t *sem) {
+ if(pthread_mutex_lock(&sem->mutex) != 0)
+ return -1;
+ while(sem->value == 0) {
+ pthread_cond_wait(&sem->cond,&sem->mutex);
+ }
+ sem->value--;
+ if(pthread_mutex_unlock(&sem->mutex) != 0)
+ return -1;
+ return 0;
+}
+
+static int sem_destroy(sem_t *sem) {
+ int ret;
+ ret = pthread_cond_destroy(&sem->cond);
+ if(ret != 0) return -1;
+ ret = pthread_mutex_destroy(&sem->mutex);
+ if(ret != 0) return -1;
+ return 0;
+}
+
+#endif
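
The handshake these semaphores support, sketched: a creating thread blocks
until the created thread has posted. This is illustrative only and assumes
this header's environment (GC_DARWIN_THREADS defined, the collector's
private headers on the include path); the real counterparts live in the
Darwin thread-support code.

#define GC_DARWIN_THREADS
#include "private/darwin_semaphore.h"
#include <pthread.h>

static sem_t registered;

static void *started_thread(void *arg) {
    /* ... record this thread's state ... */
    sem_post(&registered);   /* wake the creator */
    return 0;
}

static void create_and_wait(pthread_t *t) {
    sem_init(&registered, 0, 0);
    pthread_create(t, 0, started_thread, 0);
    sem_wait(&registered);   /* block until the child has posted */
    sem_destroy(&registered);
}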
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_stop_world.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_stop_world.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_stop_world.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/darwin_stop_world.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,22 @@
+#ifndef GC_DARWIN_STOP_WORLD_H
+#define GC_DARWIN_STOP_WORLD_H
+
+#if !defined(GC_DARWIN_THREADS)
+#error darwin_stop_world.h included without GC_DARWIN_THREADS defined
+#endif
+
+#include <mach/mach.h>
+#include <mach/thread_act.h>
+
+struct thread_stop_info {
+ mach_port_t mach_thread;
+};
+
+struct GC_mach_thread {
+ thread_act_t thread;
+ int already_suspended;
+};
+
+void GC_darwin_register_mach_handler_thread(mach_port_t thread);
+
+#endif
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/dbg_mlc.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/dbg_mlc.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/dbg_mlc.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/dbg_mlc.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,175 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1997 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 1999 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/*
+ * This is mostly an internal header file. Typical clients should
+ * not use it. Clients that define their own object kinds with
+ * debugging allocators will probably want to include this, however.
+ * No attempt is made to keep the namespace clean. This should not be
+ * included from header files that are frequently included by clients.
+ */
+
+#ifndef _DBG_MLC_H
+
+#define _DBG_MLC_H
+
+# define I_HIDE_POINTERS
+# include "gc_priv.h"
+# ifdef KEEP_BACK_PTRS
+# include "gc_backptr.h"
+# endif
+
+#ifndef HIDE_POINTER
+ /* Gc.h was previously included, and hence the I_HIDE_POINTERS */
+ /* definition had no effect. Repeat the gc.h definitions here to */
+ /* get them anyway. */
+ typedef GC_word GC_hidden_pointer;
+# define HIDE_POINTER(p) (~(GC_hidden_pointer)(p))
+# define REVEAL_POINTER(p) ((GC_PTR)(HIDE_POINTER(p)))
+#endif /* HIDE_POINTER */
+
+# define START_FLAG ((word)0xfedcedcb)
+# define END_FLAG ((word)0xbcdecdef)
+ /* Stored both one past the end of user object, and one before */
+ /* the end of the object as seen by the allocator. */
+
+# if defined(KEEP_BACK_PTRS) || defined(PRINT_BLACK_LIST) \
+ || defined(MAKE_BACK_GRAPH)
+ /* Pointer "source"s that aren't real locations. */
+ /* Used in oh_back_ptr fields and as "source" */
+ /* argument to some marking functions. */
+# define NOT_MARKED (ptr_t)(0)
+# define MARKED_FOR_FINALIZATION (ptr_t)(2)
+ /* Object was marked because it is finalizable. */
+# define MARKED_FROM_REGISTER (ptr_t)(4)
+ /* Object was marked from a register. Hence the */
+ /* source of the reference doesn't have an address. */
+# endif /* KEEP_BACK_PTRS || PRINT_BLACK_LIST */
+
+/* Object header */
+typedef struct {
+# if defined(KEEP_BACK_PTRS) || defined(MAKE_BACK_GRAPH)
+ /* We potentially keep two different kinds of back */
+ /* pointers. KEEP_BACK_PTRS stores a single back */
+ /* pointer in each reachable object to allow reporting */
+ /* of why an object was retained. MAKE_BACK_GRAPH */
+ /* builds a graph containing the inverse of all */
+ /* "points-to" edges including those involving */
+ /* objects that have just become unreachable. This */
+ /* allows detection of growing chains of unreachable */
+ /* objects. It may be possible to eventually combine */
+ /* both, but for now we keep them separate. Both */
+ /* kinds of back pointers are hidden using the */
+ /* following macros. In both cases, the hidden (stored) */
+ /* version is constrained to have a least significant bit of 1, */
+ /* to allow it to be distinguished from a free list */
+ /* link. This means the plain version must have an */
+ /* lsb of 0. */
+ /* Note that blocks dropped by black-listing will */
+ /* also have the lsb clear once debugging has */
+ /* started. */
+ /* We're careful never to overwrite a value with lsb 0. */
+# if ALIGNMENT == 1
+ /* Fudge back pointer to be even. */
+# define HIDE_BACK_PTR(p) HIDE_POINTER(~1 & (GC_word)(p))
+# else
+# define HIDE_BACK_PTR(p) HIDE_POINTER(p)
+# endif
+
+# ifdef KEEP_BACK_PTRS
+ GC_hidden_pointer oh_back_ptr;
+# endif
+# ifdef MAKE_BACK_GRAPH
+ GC_hidden_pointer oh_bg_ptr;
+# endif
+# if defined(ALIGN_DOUBLE) && \
+ (defined(KEEP_BACK_PTRS) != defined(MAKE_BACK_GRAPH))
+ word oh_dummy;
+# endif
+# endif
+ GC_CONST char * oh_string; /* object descriptor string */
+ word oh_int; /* object descriptor integers */
+# ifdef NEED_CALLINFO
+ struct callinfo oh_ci[NFRAMES];
+# endif
+# ifndef SHORT_DBG_HDRS
+ word oh_sz; /* Original malloc arg. */
+ word oh_sf; /* start flag */
+# endif /* SHORT_DBG_HDRS */
+} oh;
+/* The size of the above structure is assumed not to dealign things, */
+/* and to be a multiple of the word length. */
+
+#ifdef SHORT_DBG_HDRS
+# define DEBUG_BYTES (sizeof (oh))
+# define UNCOLLECTABLE_DEBUG_BYTES DEBUG_BYTES
+#else
+ /* Add space for END_FLAG, but use any extra space that was already */
+ /* added to catch off-the-end pointers. */
+ /* For uncollectable objects, the extra byte is not added. */
+# define UNCOLLECTABLE_DEBUG_BYTES (sizeof (oh) + sizeof (word))
+# define DEBUG_BYTES (UNCOLLECTABLE_DEBUG_BYTES - EXTRA_BYTES)
+#endif
+
+/* Round bytes to words without adding extra byte at end. */
+#define SIMPLE_ROUNDED_UP_WORDS(n) BYTES_TO_WORDS((n) + WORDS_TO_BYTES(1) - 1)
+
+/* ADD_CALL_CHAIN stores a (partial) call chain into an object */
+/* header. It may be called with or without the allocation */
+/* lock. */
+/* PRINT_CALL_CHAIN prints the call chain stored in an object */
+/* to stderr. It requires that we do not hold the lock. */
+#ifdef SAVE_CALL_CHAIN
+# define ADD_CALL_CHAIN(base, ra) GC_save_callers(((oh *)(base)) -> oh_ci)
+# define PRINT_CALL_CHAIN(base) GC_print_callers(((oh *)(base)) -> oh_ci)
+#else
+# ifdef GC_ADD_CALLER
+# define ADD_CALL_CHAIN(base, ra) ((oh *)(base)) -> oh_ci[0].ci_pc = (ra)
+# define PRINT_CALL_CHAIN(base) GC_print_callers(((oh *)(base)) -> oh_ci)
+# else
+# define ADD_CALL_CHAIN(base, ra)
+# define PRINT_CALL_CHAIN(base)
+# endif
+#endif
+
+# ifdef GC_ADD_CALLER
+# define OPT_RA ra,
+# else
+# define OPT_RA
+# endif
+
+
+/* Check whether object with base pointer p has debugging info */
+/* p is assumed to point to a legitimate object in our part */
+/* of the heap. */
+#ifdef SHORT_DBG_HDRS
+# define GC_has_other_debug_info(p) TRUE
+#else
+ GC_bool GC_has_other_debug_info(/* p */);
+#endif
+
+#if defined(KEEP_BACK_PTRS) || defined(MAKE_BACK_GRAPH)
+# define GC_HAS_DEBUG_INFO(p) \
+ ((*((word *)p) & 1) && GC_has_other_debug_info(p))
+#else
+# define GC_HAS_DEBUG_INFO(p) GC_has_other_debug_info(p)
+#endif
+
+/* Store debugging info into p. Return displaced pointer. */
+/* Assumes we don't hold allocation lock. */
+ptr_t GC_store_debug_info(/* p, sz, string, integer */);
+
+#endif /* _DBG_MLC_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_hdrs.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_hdrs.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_hdrs.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_hdrs.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,233 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, July 11, 1995 11:54 am PDT */
+# ifndef GC_HEADERS_H
+# define GC_HEADERS_H
+typedef struct hblkhdr hdr;
+
+# if CPP_WORDSZ != 32 && CPP_WORDSZ < 36
+ --> Get a real machine.
+# endif
+
+/*
+ * The 2 level tree data structure that is used to find block headers.
+ * If there are more than 32 bits in a pointer, the top level is a hash
+ * table.
+ *
+ * This defines HDR, GET_HDR, and SET_HDR, the main macros used to
+ * retrieve and set object headers.
+ *
+ * Since 5.0 alpha 5, we can also take advantage of a header lookup
+ * cache. This is a locally declared direct mapped cache, used inside
+ * the marker. The HC_GET_HDR macro uses and maintains this
+ * cache. Assuming we get reasonable hit rates, this shaves a few
+ * memory references from each pointer validation.
+ */
+
+# if CPP_WORDSZ > 32
+# define HASH_TL
+# endif
+
+/* Define appropriate out-degrees for each of the two tree levels */
+# ifdef SMALL_CONFIG
+# define LOG_BOTTOM_SZ 11
+ /* Keep top index size reasonable with smaller blocks. */
+# else
+# define LOG_BOTTOM_SZ 10
+# endif
+# ifndef HASH_TL
+# define LOG_TOP_SZ (WORDSZ - LOG_BOTTOM_SZ - LOG_HBLKSIZE)
+# else
+# define LOG_TOP_SZ 11
+# endif
+# define TOP_SZ (1 << LOG_TOP_SZ)
+# define BOTTOM_SZ (1 << LOG_BOTTOM_SZ)
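
[Editor's note: to make the two-level split concrete, here is an illustrative
decomposition of an address under assumed values (32-bit words,
LOG_HBLKSIZE == 12, LOG_BOTTOM_SZ == 10, hence LOG_TOP_SZ == 10); the real
constants come from gcconfig.h and the definitions above.]

    #include <stdio.h>

    int main(void)
    {
        unsigned long p = 0x8049f30UL;                        /* sample address */
        unsigned long offset = p & ((1UL << 12) - 1);         /* byte offset in block */
        unsigned long bottom = (p >> 12) & ((1UL << 10) - 1); /* bottom_index slot */
        unsigned long top = p >> (12 + 10);                   /* GC_top_index slot */

        printf("top=%lu bottom=%lu offset=%lu\n", top, bottom, offset);
        return 0;
    }
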
+
+#ifndef SMALL_CONFIG
+# define USE_HDR_CACHE
+#endif
+
+/* #define COUNT_HDR_CACHE_HITS */
+
+extern hdr * GC_invalid_header; /* header for an imaginary block */
+ /* containing no objects. */
+
+
+/* Check whether p and the corresponding hhdr point into a large */
+/* or invalid object. If so, advance p to the beginning of the */
+/* block and update hhdr, or set hhdr to GC_invalid_header. */
+#define ADVANCE(p, hhdr, source) \
+ { \
+ hdr * new_hdr = GC_invalid_header; \
+ p = GC_find_start(p, hhdr, &new_hdr); \
+ hhdr = new_hdr; \
+ }
+
+#ifdef USE_HDR_CACHE
+
+# ifdef COUNT_HDR_CACHE_HITS
+ extern word GC_hdr_cache_hits;
+ extern word GC_hdr_cache_misses;
+# define HC_HIT() ++GC_hdr_cache_hits
+# define HC_MISS() ++GC_hdr_cache_misses
+# else
+# define HC_HIT()
+# define HC_MISS()
+# endif
+
+ typedef struct hce {
+ word block_addr; /* right shifted by LOG_HBLKSIZE */
+ hdr * hce_hdr;
+ } hdr_cache_entry;
+
+# define HDR_CACHE_SIZE 8 /* power of 2 */
+
+# define DECLARE_HDR_CACHE \
+ hdr_cache_entry hdr_cache[HDR_CACHE_SIZE]
+
+# define INIT_HDR_CACHE BZERO(hdr_cache, sizeof(hdr_cache));
+
+# define HCE(h) hdr_cache + (((word)(h) >> LOG_HBLKSIZE) & (HDR_CACHE_SIZE-1))
+
+# define HCE_VALID_FOR(hce,h) ((hce) -> block_addr == \
+ ((word)(h) >> LOG_HBLKSIZE))
+
+# define HCE_HDR(h) ((hce) -> hce_hdr)
+
+
+/* Analogous to GET_HDR, except that in the case of large objects, it */
+/* returns the header for the object beginning, and updates p. */
+/* Returns GC_invalid_header instead of 0. All of this saves a branch */
+/* in the fast path. */
+# define HC_GET_HDR(p, hhdr, source) \
+ { \
+ hdr_cache_entry * hce = HCE(p); \
+ if (HCE_VALID_FOR(hce, p)) { \
+ HC_HIT(); \
+ hhdr = hce -> hce_hdr; \
+ } else { \
+ HC_MISS(); \
+ GET_HDR(p, hhdr); \
+ if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) { \
+ ADVANCE(p, hhdr, source); \
+ } else { \
+ hce -> block_addr = (word)(p) >> LOG_HBLKSIZE; \
+ hce -> hce_hdr = hhdr; \
+ } \
+ } \
+ }
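
[Editor's note: for readability, the cache logic above can be written out as a
plain function. This is a sketch only, assuming the macros and types defined in
this header; the real code is a macro so that p and hhdr are updated in place
inside the marker's inner loop.]

    static hdr * hc_lookup(hdr_cache_entry * hdr_cache, ptr_t * pp)
    {
        hdr_cache_entry * hce = HCE(*pp);   /* direct-mapped slot for *pp */
        hdr * hhdr;

        if (HCE_VALID_FOR(hce, *pp)) {
            HC_HIT();
            return hce -> hce_hdr;          /* cache hit */
        }
        HC_MISS();
        GET_HDR(*pp, hhdr);                 /* full two-level lookup */
        if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
            ADVANCE(*pp, hhdr, 0);          /* large or invalid object */
        } else {
            hce -> block_addr = (word)(*pp) >> LOG_HBLKSIZE;
            hce -> hce_hdr = hhdr;          /* refill the slot */
        }
        return hhdr;
    }
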
+
+#else /* !USE_HDR_CACHE */
+
+# define DECLARE_HDR_CACHE
+
+# define INIT_HDR_CACHE
+
+# define HC_GET_HDR(p, hhdr, source) \
+ { \
+ GET_HDR(p, hhdr); \
+ if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) { \
+ ADVANCE(p, hhdr, source); \
+ } \
+ }
+#endif
+
+typedef struct bi {
+ hdr * index[BOTTOM_SZ];
+ /*
+ * The bottom level index contains one of three kinds of values:
+ * 0 means we're not responsible for this block,
+ * or this is a block other than the first one in a free block.
+ * 1 < (long)X <= MAX_JUMP means the block starts at least
+ * X * HBLKSIZE bytes before the current address.
+ * A valid pointer points to a hdr structure. (The above can't be
+ * valid pointers due to the GET_MEM return convention.)
+ */
+ struct bi * asc_link; /* All indices are linked in */
+ /* ascending order... */
+ struct bi * desc_link; /* ... and in descending order. */
+ word key; /* high order address bits. */
+# ifdef HASH_TL
+ struct bi * hash_link; /* Hash chain link. */
+# endif
+} bottom_index;
+
+/* extern bottom_index GC_all_nils; - really part of GC_arrays */
+
+/* extern bottom_index * GC_top_index []; - really part of GC_arrays */
+ /* Each entry points to a bottom_index. */
+ /* On a 32 bit machine, it points to */
+ /* the index for a set of high order */
+ /* bits equal to the index. For longer */
+ /* addresses, we hash the high order */
+ /* bits to compute the index in */
+ /* GC_top_index, and each entry points */
+ /* to a hash chain. */
+ /* The last entry in each chain is */
+ /* GC_all_nils. */
+
+
+# define MAX_JUMP (HBLKSIZE - 1)
+
+# define HDR_FROM_BI(bi, p) \
+ ((bi)->index[((word)(p) >> LOG_HBLKSIZE) & (BOTTOM_SZ - 1)])
+# ifndef HASH_TL
+# define BI(p) (GC_top_index \
+ [(word)(p) >> (LOG_BOTTOM_SZ + LOG_HBLKSIZE)])
+# define HDR_INNER(p) HDR_FROM_BI(BI(p),p)
+# ifdef SMALL_CONFIG
+# define HDR(p) GC_find_header((ptr_t)(p))
+# else
+# define HDR(p) HDR_INNER(p)
+# endif
+# define GET_BI(p, bottom_indx) (bottom_indx) = BI(p)
+# define GET_HDR(p, hhdr) (hhdr) = HDR(p)
+# define SET_HDR(p, hhdr) HDR_INNER(p) = (hhdr)
+# define GET_HDR_ADDR(p, ha) (ha) = &(HDR_INNER(p))
+# else /* hash */
+/* Hash function for tree top level */
+# define TL_HASH(hi) ((hi) & (TOP_SZ - 1))
+/* Set bottom_indx to point to the bottom index for address p */
+# define GET_BI(p, bottom_indx) \
+ { \
+ register word hi = \
+ (word)(p) >> (LOG_BOTTOM_SZ + LOG_HBLKSIZE); \
+ register bottom_index * _bi = GC_top_index[TL_HASH(hi)]; \
+ \
+ while (_bi -> key != hi && _bi != GC_all_nils) \
+ _bi = _bi -> hash_link; \
+ (bottom_indx) = _bi; \
+ }
+# define GET_HDR_ADDR(p, ha) \
+ { \
+ register bottom_index * bi; \
+ \
+ GET_BI(p, bi); \
+ (ha) = &(HDR_FROM_BI(bi, p)); \
+ }
+# define GET_HDR(p, hhdr) { register hdr ** _ha; GET_HDR_ADDR(p, _ha); \
+ (hhdr) = *_ha; }
+# define SET_HDR(p, hhdr) { register hdr ** _ha; GET_HDR_ADDR(p, _ha); \
+ *_ha = (hhdr); }
+# define HDR(p) GC_find_header((ptr_t)(p))
+# endif
+
+/* Is the result a forwarding address to someplace closer to the */
+/* beginning of the block or NIL? */
+# define IS_FORWARDING_ADDR_OR_NIL(hhdr) ((unsigned long) (hhdr) <= MAX_JUMP)
+
+/* Get an HBLKSIZE aligned address closer to the beginning of the block */
+/* h. Assumes hhdr == HDR(h) and IS_FORWARDING_ADDR(hhdr). */
+# define FORWARDED_ADDR(h, hhdr) ((struct hblk *)(h) - (unsigned long)(hhdr))
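
[Editor's note: taken together, IS_FORWARDING_ADDR_OR_NIL and FORWARDED_ADDR
let a lookup walk from an interior block of a large object back to the block
holding the real header. A sketch of that walk, assuming HBLKPTR from
gc_priv.h; GC_find_header is the collector's actual entry point.]

    static hdr * find_header_sketch(ptr_t p)
    {
        struct hblk * h = HBLKPTR(p);      /* round down to a block boundary */
        hdr * hhdr;

        GET_HDR(h, hhdr);
        while (hhdr != 0 && IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
            h = FORWARDED_ADDR(h, hhdr);   /* jump toward the block start */
            GET_HDR(h, hhdr);
        }
        return hhdr;                       /* 0 ==> not a collector block */
    }
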
+# endif /* GC_HEADERS_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_locks.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_locks.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_locks.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_locks.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,692 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996-1999 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 1999 by Hewlett-Packard Company. All rights reserved.
+ *
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+#ifndef GC_LOCKS_H
+#define GC_LOCKS_H
+
+/*
+ * Mutual exclusion between allocator/collector routines.
+ * Needed if there is more than one allocator thread.
+ * FASTLOCK() is assumed to try to acquire the lock in a cheap and
+ * dirty way that is acceptable for a few instructions, e.g. by
+ * inhibiting preemption. This is assumed to have succeeded only
+ * if a subsequent call to FASTLOCK_SUCCEEDED() returns TRUE.
+ * FASTUNLOCK() is called whether or not FASTLOCK_SUCCEEDED().
+ * If signals cannot be tolerated with the FASTLOCK held, then
+ * FASTLOCK should disable signals. The code executed under
+ * FASTLOCK is otherwise immune to interruption, provided it is
+ * not restarted.
+ * DCL_LOCK_STATE declares any local variables needed by LOCK and UNLOCK
+ * and/or DISABLE_SIGNALS and ENABLE_SIGNALS and/or FASTLOCK.
+ * (There is currently no equivalent for FASTLOCK.)
+ *
+ * In the PARALLEL_MARK case, we also need to define a number of
+ * other inline functions here:
+ * GC_bool GC_compare_and_exchange( volatile GC_word *addr,
+ * GC_word old, GC_word new )
+ * GC_word GC_atomic_add( volatile GC_word *addr, GC_word how_much )
+ * void GC_memory_barrier( )
+ *
+ */
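
[Editor's note: in usage form, the contract spelled out above looks like the
following sketch; FASTUNLOCK() runs on both outcomes, and only
FASTLOCK_SUCCEEDED() decides whether the fast acquisition counted.]

    void fast_path_sketch(void)
    {
        DCL_LOCK_STATE;

        FASTLOCK();
        if (!FASTLOCK_SUCCEEDED()) {
            FASTUNLOCK();          /* required even on failure */
            LOCK();                /* fall back to the full lock */
            /* ... critical section ... */
            UNLOCK();
            return;
        }
        /* ... a few instructions; must not rely on being restarted ... */
        FASTUNLOCK();
    }
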
+# ifdef THREADS
+ void GC_noop1 GC_PROTO((word));
+# ifdef PCR_OBSOLETE /* Faster, but broken with multiple lwp's */
+# include "th/PCR_Th.h"
+# include "th/PCR_ThCrSec.h"
+ extern struct PCR_Th_MLRep GC_allocate_ml;
+# define DCL_LOCK_STATE PCR_sigset_t GC_old_sig_mask
+# define LOCK() PCR_Th_ML_Acquire(&GC_allocate_ml)
+# define UNLOCK() PCR_Th_ML_Release(&GC_allocate_ml)
+# define FASTLOCK() PCR_ThCrSec_EnterSys()
+ /* Here we cheat (a lot): */
+# define FASTLOCK_SUCCEEDED() (*(int *)(&GC_allocate_ml) == 0)
+ /* TRUE if nobody currently holds the lock */
+# define FASTUNLOCK() PCR_ThCrSec_ExitSys()
+# endif
+# ifdef PCR
+# include <base/PCR_Base.h>
+# include <th/PCR_Th.h>
+ extern PCR_Th_ML GC_allocate_ml;
+# define DCL_LOCK_STATE \
+ PCR_ERes GC_fastLockRes; PCR_sigset_t GC_old_sig_mask
+# define LOCK() PCR_Th_ML_Acquire(&GC_allocate_ml)
+# define UNLOCK() PCR_Th_ML_Release(&GC_allocate_ml)
+# define FASTLOCK() (GC_fastLockRes = PCR_Th_ML_Try(&GC_allocate_ml))
+# define FASTLOCK_SUCCEEDED() (GC_fastLockRes == PCR_ERes_okay)
+# define FASTUNLOCK() {\
+ if( FASTLOCK_SUCCEEDED() ) PCR_Th_ML_Release(&GC_allocate_ml); }
+# endif
+# ifdef SRC_M3
+ extern GC_word RT0u__inCritical;
+# define LOCK() RT0u__inCritical++
+# define UNLOCK() RT0u__inCritical--
+# endif
+# ifdef GC_SOLARIS_THREADS
+# include <thread.h>
+# include <signal.h>
+ extern mutex_t GC_allocate_ml;
+# define LOCK() mutex_lock(&GC_allocate_ml);
+# define UNLOCK() mutex_unlock(&GC_allocate_ml);
+# endif
+
+/* Try to define GC_TEST_AND_SET and a matching GC_CLEAR for spin lock */
+/* acquisition and release. We need this for correct operation of the */
+/* incremental GC. */
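
[Editor's note: these two primitives suffice for a test-and-set spin lock; a
minimal sketch follows. The collector's real slow path, GC_lock(), spins
briefly and then sleeps instead of busy-waiting forever.]

    static volatile unsigned int lock_word = 0;

    static void spin_acquire(void)
    {
        while (GC_test_and_set(&lock_word)) {
            /* busy-wait; returns nonzero while another thread holds it */
        }
    }

    static void spin_release(void)
    {
        GC_clear(&lock_word);   /* store 0, with any needed release barrier */
    }
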
+# ifdef __GNUC__
+# if defined(I386)
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ int oldval;
+ /* Note: the "xchg" instruction does not need a "lock" prefix */
+ __asm__ __volatile__("xchgl %0, %1"
+ : "=r"(oldval), "=m"(*(addr))
+ : "0"(1), "m"(*(addr)) : "memory");
+ return oldval;
+ }
+# define GC_TEST_AND_SET_DEFINED
+# endif
+# if defined(IA64)
+# include <ia64intrin.h>
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ return __sync_lock_test_and_set(addr, 1);
+ }
+# define GC_TEST_AND_SET_DEFINED
+ inline static void GC_clear(volatile unsigned int *addr) {
+ *addr = 0;
+ }
+# define GC_CLEAR_DEFINED
+# endif
+# ifdef SPARC
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ int oldval;
+
+ __asm__ __volatile__("ldstub %1,%0"
+ : "=r"(oldval), "=m"(*addr)
+ : "m"(*addr) : "memory");
+ return oldval;
+ }
+# define GC_TEST_AND_SET_DEFINED
+# endif
+# ifdef M68K
+ /* Contributed by Tony Mantler. I'm not sure how well it was */
+ /* tested. */
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ char oldval; /* this must be no longer than 8 bits */
+
+ /* The return value is semi-phony. */
+ /* 'tas' sets bit 7 while the return */
+ /* value pretends bit 0 was set */
+ __asm__ __volatile__(
+ "tas %1@; sne %0; negb %0"
+ : "=d" (oldval)
+ : "a" (addr) : "memory");
+ return oldval;
+ }
+# define GC_TEST_AND_SET_DEFINED
+# endif
+# if defined(POWERPC)
+# if 0 /* CPP_WORDSZ == 64 totally broken to use int locks with ldarx */
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ unsigned long oldval;
+ unsigned long temp = 1; /* locked value */
+
+ __asm__ __volatile__(
+ "1:\tldarx %0,0,%3\n" /* load and reserve */
+ "\tcmpdi %0, 0\n" /* if load is */
+ "\tbne 2f\n" /* non-zero, return already set */
+ "\tstdcx. %2,0,%1\n" /* else store conditional */
+ "\tbne- 1b\n" /* retry if lost reservation */
+ "\tsync\n" /* import barrier */
+ "2:\t\n" /* oldval is zero if we set */
+ : "=&r"(oldval), "=p"(addr)
+ : "r"(temp), "1"(addr)
+ : "cr0","memory");
+ return (int)oldval;
+ }
+# else
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ int oldval;
+ int temp = 1; /* locked value */
+
+ __asm__ __volatile__(
+ "1:\tlwarx %0,0,%3\n" /* load and reserve */
+ "\tcmpwi %0, 0\n" /* if load is */
+ "\tbne 2f\n" /* non-zero, return already set */
+ "\tstwcx. %2,0,%1\n" /* else store conditional */
+ "\tbne- 1b\n" /* retry if lost reservation */
+ "\tsync\n" /* import barrier */
+ "2:\t\n" /* oldval is zero if we set */
+ : "=&r"(oldval), "=p"(addr)
+ : "r"(temp), "1"(addr)
+ : "cr0","memory");
+ return oldval;
+ }
+# endif
+# define GC_TEST_AND_SET_DEFINED
+ inline static void GC_clear(volatile unsigned int *addr) {
+ __asm__ __volatile__("lwsync" : : : "memory");
+ *(addr) = 0;
+ }
+# define GC_CLEAR_DEFINED
+# endif
+# if defined(ALPHA)
+ inline static int GC_test_and_set(volatile unsigned int * addr)
+ {
+ unsigned long oldvalue;
+ unsigned long temp;
+
+ __asm__ __volatile__(
+ "1: ldl_l %0,%1\n"
+ " and %0,%3,%2\n"
+ " bne %2,2f\n"
+ " xor %0,%3,%0\n"
+ " stl_c %0,%1\n"
+# ifdef __ELF__
+ " beq %0,3f\n"
+# else
+ " beq %0,1b\n"
+# endif
+ " mb\n"
+ "2:\n"
+# ifdef __ELF__
+ ".section .text2,\"ax\"\n"
+ "3: br 1b\n"
+ ".previous"
+# endif
+ :"=&r" (temp), "=m" (*addr), "=&r" (oldvalue)
+ :"Ir" (1), "m" (*addr)
+ :"memory");
+
+ return oldvalue;
+ }
+# define GC_TEST_AND_SET_DEFINED
+ inline static void GC_clear(volatile unsigned int *addr) {
+ __asm__ __volatile__("mb" : : : "memory");
+ *(addr) = 0;
+ }
+# define GC_CLEAR_DEFINED
+# endif /* ALPHA */
+# ifdef ARM32
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ int oldval;
+ /* SWP on ARM is very similar to XCHG on x86. Doesn't lock the
+ * bus because there are no SMP ARM machines. If/when there are,
+ * this code will likely need to be updated. */
+ /* See linuxthreads/sysdeps/arm/pt-machine.h in glibc-2.1 */
+ __asm__ __volatile__("swp %0, %1, [%2]"
+ : "=r"(oldval)
+ : "0"(1), "r"(addr)
+ : "memory");
+ return oldval;
+ }
+# define GC_TEST_AND_SET_DEFINED
+# endif /* ARM32 */
+# ifdef CRIS
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ /* Ripped from linuxthreads/sysdeps/cris/pt-machine.h. */
+ /* Included with Hans-Peter Nilsson's permission. */
+ register unsigned long int ret;
+
+ /* Note the use of a dummy output of *addr to expose the write.
+ * The memory barrier is to stop *other* writes being moved past
+ * this code.
+ */
+ __asm__ __volatile__("clearf\n"
+ "0:\n\t"
+ "movu.b [%2],%0\n\t"
+ "ax\n\t"
+ "move.b %3,[%2]\n\t"
+ "bwf 0b\n\t"
+ "clearf"
+ : "=&r" (ret), "=m" (*addr)
+ : "r" (addr), "r" ((int) 1), "m" (*addr)
+ : "memory");
+ return ret;
+ }
+# define GC_TEST_AND_SET_DEFINED
+# endif /* CRIS */
+# ifdef S390
+ inline static int GC_test_and_set(volatile unsigned int *addr) {
+ int ret;
+ __asm__ __volatile__ (
+ " l %0,0(%2)\n"
+ "0: cs %0,%1,0(%2)\n"
+ " jl 0b"
+ : "=&d" (ret)
+ : "d" (1), "a" (addr)
+ : "cc", "memory");
+ return ret;
+ }
+# endif
+# endif /* __GNUC__ */
+# if (defined(ALPHA) && !defined(__GNUC__))
+# ifndef OSF1
+ --> We currently assume that if gcc is not used, we are
+ --> running under Tru64.
+# endif
+# include <machine/builtins.h>
+# include <c_asm.h>
+# define GC_test_and_set(addr) __ATOMIC_EXCH_LONG(addr, 1)
+# define GC_TEST_AND_SET_DEFINED
+# define GC_clear(addr) { asm("mb"); *(volatile unsigned *)addr = 0; }
+# define GC_CLEAR_DEFINED
+# endif
+# if defined(MSWIN32)
+# define GC_test_and_set(addr) InterlockedExchange((LPLONG)addr,1)
+# define GC_TEST_AND_SET_DEFINED
+# endif
+# ifdef MIPS
+# ifdef LINUX
+# include <sys/tas.h>
+# define GC_test_and_set(addr) _test_and_set((int *) addr,1)
+# define GC_TEST_AND_SET_DEFINED
+# elif __mips < 3 || !(defined (_ABIN32) || defined(_ABI64)) \
+ || !defined(_COMPILER_VERSION) || _COMPILER_VERSION < 700
+# ifdef __GNUC__
+# define GC_test_and_set(addr) _test_and_set((void *)addr,1)
+# else
+# define GC_test_and_set(addr) test_and_set((void *)addr,1)
+# endif
+# else
+# include <sgidefs.h>
+# include <mutex.h>
+# define GC_test_and_set(addr) __test_and_set32((void *)addr,1)
+# define GC_clear(addr) __lock_release(addr);
+# define GC_CLEAR_DEFINED
+# endif
+# define GC_TEST_AND_SET_DEFINED
+# endif /* MIPS */
+# if defined(_AIX)
+# include <sys/atomic_op.h>
+# if (defined(_POWER) || defined(_POWERPC))
+# if defined(__GNUC__)
+ inline static void GC_memsync() {
+ __asm__ __volatile__ ("sync" : : : "memory");
+ }
+# else
+# ifndef inline
+# define inline __inline
+# endif
+# pragma mc_func GC_memsync { \
+ "7c0004ac" /* sync (same opcode used for dcs)*/ \
+ }
+# endif
+# else
+# error "don't know how to memsync"
+# endif
+ inline static int GC_test_and_set(volatile unsigned int * addr) {
+ int oldvalue = 0;
+ if (compare_and_swap((void *)addr, &oldvalue, 1)) {
+ GC_memsync();
+ return 0;
+ } else return 1;
+ }
+# define GC_TEST_AND_SET_DEFINED
+ inline static void GC_clear(volatile unsigned int *addr) {
+ GC_memsync();
+ *(addr) = 0;
+ }
+# define GC_CLEAR_DEFINED
+
+# endif
+# if 0 /* defined(HP_PA) */
+ /* The official recommendation seems to be to not use ldcw from */
+ /* user mode. Since multithreaded incremental collection doesn't */
+ /* work anyway on HP_PA, this shouldn't be a major loss. */
+
+ /* "set" means 0 and "clear" means 1 here. */
+# define GC_test_and_set(addr) !GC_test_and_clear(addr);
+# define GC_TEST_AND_SET_DEFINED
+# define GC_clear(addr) GC_noop1((word)(addr)); *(volatile unsigned int *)addr = 1;
+ /* The above needs a memory barrier! */
+# define GC_CLEAR_DEFINED
+# endif
+# if defined(GC_TEST_AND_SET_DEFINED) && !defined(GC_CLEAR_DEFINED)
+# ifdef __GNUC__
+ inline static void GC_clear(volatile unsigned int *addr) {
+ /* Try to discourage gcc from moving anything past this. */
+ __asm__ __volatile__(" " : : : "memory");
+ *(addr) = 0;
+ }
+# else
+ /* The function call in the following should prevent the */
+ /* compiler from moving assignments to below the UNLOCK. */
+# define GC_clear(addr) GC_noop1((word)(addr)); \
+ *((volatile unsigned int *)(addr)) = 0;
+# endif
+# define GC_CLEAR_DEFINED
+# endif /* !GC_CLEAR_DEFINED */
+
+# if !defined(GC_TEST_AND_SET_DEFINED)
+# define USE_PTHREAD_LOCKS
+# endif
+
+# if defined(GC_PTHREADS) && !defined(GC_SOLARIS_THREADS) \
+ && !defined(GC_WIN32_THREADS)
+# define NO_THREAD (pthread_t)(-1)
+# include <pthread.h>
+# if defined(PARALLEL_MARK)
+ /* We need compare-and-swap to update mark bits, where it's */
+ /* performance critical. If USE_MARK_BYTES is defined, it is */
+ /* no longer needed for this purpose. However we use it in */
+ /* either case to implement atomic fetch-and-add, though that's */
+ /* less performance critical, and could perhaps be done with */
+ /* a lock. */
+# if defined(GENERIC_COMPARE_AND_SWAP)
+ /* Probably not useful, except for debugging. */
+ /* We do use GENERIC_COMPARE_AND_SWAP on PA_RISC, but we */
+ /* minimize its use. */
+ extern pthread_mutex_t GC_compare_and_swap_lock;
+
+ /* Note that if GC_word updates are not atomic, a concurrent */
+ /* reader should acquire GC_compare_and_swap_lock. On */
+ /* currently supported platforms, such updates are atomic. */
+ extern GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old, GC_word new_val);
+# endif /* GENERIC_COMPARE_AND_SWAP */
+# if defined(I386)
+# if !defined(GENERIC_COMPARE_AND_SWAP)
+ /* Returns TRUE if the comparison succeeded. */
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old,
+ GC_word new_val)
+ {
+ char result;
+ __asm__ __volatile__("lock; cmpxchgl %2, %0; setz %1"
+ : "+m"(*(addr)), "=r"(result)
+ : "r" (new_val), "a"(old) : "memory");
+ return (GC_bool) result;
+ }
+# endif /* !GENERIC_COMPARE_AND_SWAP */
+ inline static void GC_memory_barrier()
+ {
+ /* We believe the processor ensures at least processor */
+ /* consistent ordering. Thus a compiler barrier */
+ /* should suffice. */
+ __asm__ __volatile__("" : : : "memory");
+ }
+# endif /* I386 */
+
+# if defined(POWERPC)
+# if !defined(GENERIC_COMPARE_AND_SWAP)
+# if CPP_WORDSZ == 64
+ /* Returns TRUE if the comparison succeeded. */
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old, GC_word new_val)
+ {
+ unsigned long result, dummy;
+ __asm__ __volatile__(
+ "1:\tldarx %0,0,%5\n"
+ "\tcmpd %0,%4\n"
+ "\tbne 2f\n"
+ "\tstdcx. %3,0,%2\n"
+ "\tbne- 1b\n"
+ "\tsync\n"
+ "\tli %1, 1\n"
+ "\tb 3f\n"
+ "2:\tli %1, 0\n"
+ "3:\t\n"
+ : "=&r" (dummy), "=r" (result), "=p" (addr)
+ : "r" (new_val), "r" (old), "2"(addr)
+ : "cr0","memory");
+ return (GC_bool) result;
+ }
+# else
+ /* Returns TRUE if the comparison succeeded. */
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old, GC_word new_val)
+ {
+ int result, dummy;
+ __asm__ __volatile__(
+ "1:\tlwarx %0,0,%5\n"
+ "\tcmpw %0,%4\n"
+ "\tbne 2f\n"
+ "\tstwcx. %3,0,%2\n"
+ "\tbne- 1b\n"
+ "\tsync\n"
+ "\tli %1, 1\n"
+ "\tb 3f\n"
+ "2:\tli %1, 0\n"
+ "3:\t\n"
+ : "=&r" (dummy), "=r" (result), "=p" (addr)
+ : "r" (new_val), "r" (old), "2"(addr)
+ : "cr0","memory");
+ return (GC_bool) result;
+ }
+# endif
+# endif /* !GENERIC_COMPARE_AND_SWAP */
+ inline static void GC_memory_barrier()
+ {
+ __asm__ __volatile__("sync" : : : "memory");
+ }
+# endif /* POWERPC */
+
+# if defined(IA64)
+# if !defined(GENERIC_COMPARE_AND_SWAP)
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old,
+ GC_word new_val)
+ {
+ return __sync_bool_compare_and_swap (addr, old, new_val);
+ }
+# endif /* !GENERIC_COMPARE_AND_SWAP */
+# if 0
+ /* Shouldn't be needed; we use volatile stores instead. */
+ inline static void GC_memory_barrier()
+ {
+ __sync_synchronize ();
+ }
+# endif /* 0 */
+# endif /* IA64 */
+# if defined(ALPHA)
+# if !defined(GENERIC_COMPARE_AND_SWAP)
+# if defined(__GNUC__)
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old, GC_word new_val)
+ {
+ unsigned long was_equal;
+ unsigned long temp;
+
+ __asm__ __volatile__(
+ "1: ldq_l %0,%1\n"
+ " cmpeq %0,%4,%2\n"
+ " mov %3,%0\n"
+ " beq %2,2f\n"
+ " stq_c %0,%1\n"
+ " beq %0,1b\n"
+ "2:\n"
+ " mb\n"
+ :"=&r" (temp), "=m" (*addr), "=&r" (was_equal)
+ : "r" (new_val), "Ir" (old)
+ :"memory");
+ return was_equal;
+ }
+# else /* !__GNUC__ */
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old, GC_word new_val)
+ {
+ return __CMP_STORE_QUAD(addr, old, new_val, addr);
+ }
+# endif /* !__GNUC__ */
+# endif /* !GENERIC_COMPARE_AND_SWAP */
+# ifdef __GNUC__
+ inline static void GC_memory_barrier()
+ {
+ __asm__ __volatile__("mb" : : : "memory");
+ }
+# else
+# define GC_memory_barrier() asm("mb")
+# endif /* !__GNUC__ */
+# endif /* ALPHA */
+# if defined(S390)
+# if !defined(GENERIC_COMPARE_AND_SWAP)
+ inline static GC_bool GC_compare_and_exchange(volatile GC_word *addr,
+ GC_word old, GC_word new_val)
+ {
+ int retval;
+ __asm__ __volatile__ (
+# ifndef __s390x__
+ " cs %1,%2,0(%3)\n"
+# else
+ " csg %1,%2,0(%3)\n"
+# endif
+ " ipm %0\n"
+ " srl %0,28\n"
+ : "=&d" (retval), "+d" (old)
+ : "d" (new_val), "a" (addr)
+ : "cc", "memory");
+ return retval == 0;
+ }
+# endif
+# endif
+# if !defined(GENERIC_COMPARE_AND_SWAP)
+ /* Returns the original value of *addr. */
+ inline static GC_word GC_atomic_add(volatile GC_word *addr,
+ GC_word how_much)
+ {
+ GC_word old;
+ do {
+ old = *addr;
+ } while (!GC_compare_and_exchange(addr, old, old+how_much));
+ return old;
+ }
+# else /* GENERIC_COMPARE_AND_SWAP */
+ /* So long as a GC_word can be atomically updated, it should */
+ /* be OK to read *addr without a lock. */
+ extern GC_word GC_atomic_add(volatile GC_word *addr, GC_word how_much);
+# endif /* GENERIC_COMPARE_AND_SWAP */
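
[Editor's note: because GC_atomic_add returns the original value, it doubles
as a unique-ticket generator; an illustrative use, not collector code.]

    static volatile GC_word next_ticket = 0;

    GC_word take_ticket(void)
    {
        /* Concurrent callers each observe a distinct pre-increment value. */
        return GC_atomic_add(&next_ticket, 1);
    }
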
+
+# endif /* PARALLEL_MARK */
+
+# if !defined(THREAD_LOCAL_ALLOC) && !defined(USE_PTHREAD_LOCKS)
+ /* In the THREAD_LOCAL_ALLOC case, the allocation lock tends to */
+ /* be held for long periods, if it is held at all. Thus spinning */
+ /* and sleeping for fixed periods are likely to result in */
+ /* significant wasted time. We thus rely mostly on queued locks. */
+# define USE_SPIN_LOCK
+ extern volatile unsigned int GC_allocate_lock;
+ extern void GC_lock(void);
+ /* Allocation lock holder. Only set if acquired by client through */
+ /* GC_call_with_alloc_lock. */
+# ifdef GC_ASSERTIONS
+# define LOCK() \
+ { if (GC_test_and_set(&GC_allocate_lock)) GC_lock(); \
+ SET_LOCK_HOLDER(); }
+# define UNLOCK() \
+ { GC_ASSERT(I_HOLD_LOCK()); UNSET_LOCK_HOLDER(); \
+ GC_clear(&GC_allocate_lock); }
+# else
+# define LOCK() \
+ { if (GC_test_and_set(&GC_allocate_lock)) GC_lock(); }
+# define UNLOCK() \
+ GC_clear(&GC_allocate_lock)
+# endif /* !GC_ASSERTIONS */
+# if 0
+ /* Another alternative for OSF1 might be: */
+# include <sys/mman.h>
+ extern msemaphore GC_allocate_semaphore;
+# define LOCK() { if (msem_lock(&GC_allocate_semaphore, MSEM_IF_NOWAIT) \
+ != 0) GC_lock(); else GC_allocate_lock = 1; }
+ /* The following is INCORRECT, since the memory model is too weak. */
+ /* Is this true? Presumably msem_unlock has the right semantics? */
+ /* - HB */
+# define UNLOCK() { GC_allocate_lock = 0; \
+ msem_unlock(&GC_allocate_semaphore, 0); }
+# endif /* 0 */
+# else /* THREAD_LOCAL_ALLOC || USE_PTHREAD_LOCKS */
+# ifndef USE_PTHREAD_LOCKS
+# define USE_PTHREAD_LOCKS
+# endif
+# endif /* THREAD_LOCAL_ALLOC */
+# ifdef USE_PTHREAD_LOCKS
+# include <pthread.h>
+ extern pthread_mutex_t GC_allocate_ml;
+# ifdef GC_ASSERTIONS
+# define LOCK() \
+ { GC_lock(); \
+ SET_LOCK_HOLDER(); }
+# define UNLOCK() \
+ { GC_ASSERT(I_HOLD_LOCK()); UNSET_LOCK_HOLDER(); \
+ pthread_mutex_unlock(&GC_allocate_ml); }
+# else /* !GC_ASSERTIONS */
+# if defined(NO_PTHREAD_TRYLOCK)
+# define LOCK() GC_lock();
+# else /* !defined(NO_PTHREAD_TRYLOCK) */
+# define LOCK() \
+ { if (0 != pthread_mutex_trylock(&GC_allocate_ml)) GC_lock(); }
+# endif
+# define UNLOCK() pthread_mutex_unlock(&GC_allocate_ml)
+# endif /* !GC_ASSERTIONS */
+# endif /* USE_PTHREAD_LOCKS */
+# define SET_LOCK_HOLDER() GC_lock_holder = pthread_self()
+# define UNSET_LOCK_HOLDER() GC_lock_holder = NO_THREAD
+# define I_HOLD_LOCK() (pthread_equal(GC_lock_holder, pthread_self()))
+ extern VOLATILE GC_bool GC_collecting;
+# define ENTER_GC() GC_collecting = 1;
+# define EXIT_GC() GC_collecting = 0;
+ extern void GC_lock(void);
+ extern pthread_t GC_lock_holder;
+# ifdef GC_ASSERTIONS
+ extern pthread_t GC_mark_lock_holder;
+# endif
+# endif /* GC_PTHREADS with linux_threads.c implementation */
+# if defined(GC_WIN32_THREADS)
+# if defined(GC_PTHREADS)
+# include <pthread.h>
+ extern pthread_mutex_t GC_allocate_ml;
+# define LOCK() pthread_mutex_lock(&GC_allocate_ml)
+# define UNLOCK() pthread_mutex_unlock(&GC_allocate_ml)
+# else
+# include <windows.h>
+ GC_API CRITICAL_SECTION GC_allocate_ml;
+# define LOCK() EnterCriticalSection(&GC_allocate_ml);
+# define UNLOCK() LeaveCriticalSection(&GC_allocate_ml);
+# endif
+# endif
+# ifndef SET_LOCK_HOLDER
+# define SET_LOCK_HOLDER()
+# define UNSET_LOCK_HOLDER()
+# define I_HOLD_LOCK() FALSE
+ /* Used on platforms where locks can be reacquired, */
+ /* so it doesn't matter if we lie. */
+# endif
+# else /* !THREADS */
+# define LOCK()
+# define UNLOCK()
+# endif /* !THREADS */
+# ifndef SET_LOCK_HOLDER
+# define SET_LOCK_HOLDER()
+# define UNSET_LOCK_HOLDER()
+# define I_HOLD_LOCK() FALSE
+ /* Used on platforms where locks can be reacquired, */
+ /* so it doesn't matter if we lie. */
+# endif
+# ifndef ENTER_GC
+# define ENTER_GC()
+# define EXIT_GC()
+# endif
+
+# ifndef DCL_LOCK_STATE
+# define DCL_LOCK_STATE
+# endif
+# ifndef FASTLOCK
+# define FASTLOCK() LOCK()
+# define FASTLOCK_SUCCEEDED() TRUE
+# define FASTUNLOCK() UNLOCK()
+# endif
+
+#endif /* GC_LOCKS_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_pmark.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_pmark.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_pmark.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_pmark.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,394 @@
+/*
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 2001 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ */
+
+/* Private declarations of GC marker data structures and macros */
+
+/*
+ * Declarations of mark stack. Needed by marker and client-supplied mark
+ * routines. These transitively include gc_priv.h.
+ * (Note that gc_priv.h should not be included before this, since this
+ * includes dbg_mlc.h, which wants to include gc_priv.h AFTER defining
+ * I_HIDE_POINTERS.)
+ */
+#ifndef GC_PMARK_H
+# define GC_PMARK_H
+
+# include "gc.h" /* For configuration */
+
+# if defined(KEEP_BACK_PTRS) || defined(PRINT_BLACK_LIST)
+# include "dbg_mlc.h"
+# endif
+# ifndef GC_MARK_H
+# include "../gc_mark.h"
+# endif
+# ifndef GC_PRIVATE_H
+# include "gc_priv.h"
+# endif
+
+/* The real declarations of the following are in gc_priv.h, so that */
+/* we can avoid scanning the following table. */
+/*
+extern mark_proc GC_mark_procs[MAX_MARK_PROCS];
+*/
+
+/*
+ * Mark descriptor stuff that should remain private for now, mostly
+ * because it's hard to export WORDSZ without including gcconfig.h.
+ */
+# define BITMAP_BITS (WORDSZ - GC_DS_TAG_BITS)
+# define PROC(descr) \
+ (GC_mark_procs[((descr) >> GC_DS_TAG_BITS) & (GC_MAX_MARK_PROCS-1)])
+# define ENV(descr) \
+ ((descr) >> (GC_DS_TAG_BITS + GC_LOG_MAX_MARK_PROCS))
+# define MAX_ENV \
+ (((word)1 << (WORDSZ - GC_DS_TAG_BITS - GC_LOG_MAX_MARK_PROCS)) - 1)
+
+
+extern word GC_n_mark_procs;
+
+/* Number of mark stack entries to discard on overflow. */
+#define GC_MARK_STACK_DISCARDS (INITIAL_MARK_STACK_SIZE/8)
+
+typedef struct GC_ms_entry {
+ GC_word * mse_start; /* First word of object */
+ GC_word mse_descr; /* Descriptor; low order two bits are tags, */
+ /* identifying the upper 30 bits as one of the */
+ /* GC_DS_ descriptor kinds defined in gc_mark.h */
+ /* (GC_DS_LENGTH, GC_DS_BITMAP, GC_DS_PROC, or */
+ /* GC_DS_PER_OBJECT). */
+} mse;
+
+extern word GC_mark_stack_size;
+
+extern mse * GC_mark_stack_limit;
+
+#ifdef PARALLEL_MARK
+ extern mse * VOLATILE GC_mark_stack_top;
+#else
+ extern mse * GC_mark_stack_top;
+#endif
+
+extern mse * GC_mark_stack;
+
+#ifdef PARALLEL_MARK
+ /*
+ * Allow multiple threads to participate in the marking process.
+ * This works roughly as follows:
+ * The main mark stack never shrinks, but it can grow.
+ *
+ * The initiating threads holds the GC lock, and sets GC_help_wanted.
+ *
+ * Other threads:
+ * 1) update helper_count (while holding mark_lock.)
+ * 2) allocate a local mark stack
+ * repeatedly:
+ * 3) Steal a global mark stack entry by atomically replacing
+ * its descriptor with 0.
+ * 4) Copy it to the local stack.
+ * 5) Mark on the local stack until it is empty, or
+ * it may be profitable to copy it back.
+ * 6) If necessary, copy local stack to global one,
+ * holding mark lock.
+ * 7) Stop when the global mark stack is empty.
+ * 8) decrement helper_count (holding mark_lock).
+ *
+ * This is an experiment to see if we can do something along the lines
+ * of the University of Tokyo SGC in a less intrusive, though probably
+ * also less performant, way.
+ */
+ void GC_do_parallel_mark();
+ /* Initiate parallel marking. */
+
+ extern GC_bool GC_help_wanted; /* Protected by mark lock */
+ extern unsigned GC_helper_count; /* Number of running helpers. */
+ /* Protected by mark lock */
+ extern unsigned GC_active_count; /* Number of active helpers. */
+ /* Protected by mark lock */
+ /* May increase and decrease */
+ /* within each mark cycle. But */
+ /* once it returns to 0, it */
+ /* stays zero for the cycle. */
+ /* GC_mark_stack_top is also protected by mark lock. */
+ extern mse * VOLATILE GC_first_nonempty;
+ /* Lowest entry on mark stack */
+ /* that may be nonempty. */
+ /* Updated only by initiating */
+ /* thread. */
+ /*
+ * GC_notify_all_marker() is used when GC_help_wanted is first set,
+ * when the last helper becomes inactive,
+ * when something is added to the global mark stack, and just after
+ * GC_mark_no is incremented.
+ * This could be split into multiple CVs (and probably should be to
+ * scale to really large numbers of processors.)
+ */
+#endif /* PARALLEL_MARK */
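
[Editor's note: a sketch of the helper loop from steps 1)-8) above.
acquire_mark_lock, release_mark_lock, global_stack_empty, steal_entry,
push_local, and drain_local are hypothetical names standing in for the real
marker code that accompanies GC_do_parallel_mark.]

    void helper_sketch(void)
    {
        mse local_stack[64];                 /* step 2: size is illustrative */
        mse entry;

        acquire_mark_lock(); ++GC_helper_count; release_mark_lock(); /* 1 */
        while (!global_stack_empty()) {                              /* 7 */
            if (steal_entry(&entry)) {       /* 3: atomically zero the descr */
                push_local(local_stack, entry);                      /* 4 */
                drain_local(local_stack);    /* 5: mark until empty or full */
            }
            /* step 6: overflow is copied back while holding the mark lock */
        }
        acquire_mark_lock(); --GC_helper_count; release_mark_lock(); /* 8 */
    }
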
+
+/* Return a pointer to within the first page of the object. */
+/* Set *new_hdr_p to the corresponding hdr. */
+#ifdef __STDC__
+ ptr_t GC_find_start(ptr_t current, hdr *hhdr, hdr **new_hdr_p);
+#else
+ ptr_t GC_find_start();
+#endif
+
+mse * GC_signal_mark_stack_overflow GC_PROTO((mse *msp));
+
+# ifdef GATHERSTATS
+# define ADD_TO_ATOMIC(sz) GC_atomic_in_use += (sz)
+# define ADD_TO_COMPOSITE(sz) GC_composite_in_use += (sz)
+# else
+# define ADD_TO_ATOMIC(sz)
+# define ADD_TO_COMPOSITE(sz)
+# endif
+
+/* Push the object obj with corresponding heap block header hhdr onto */
+/* the mark stack. */
+# define PUSH_OBJ(obj, hhdr, mark_stack_top, mark_stack_limit) \
+{ \
+ register word _descr = (hhdr) -> hb_descr; \
+ \
+ if (_descr == 0) { \
+ ADD_TO_ATOMIC((hhdr) -> hb_sz); \
+ } else { \
+ ADD_TO_COMPOSITE((hhdr) -> hb_sz); \
+ mark_stack_top++; \
+ if (mark_stack_top >= mark_stack_limit) { \
+ mark_stack_top = GC_signal_mark_stack_overflow(mark_stack_top); \
+ } \
+ mark_stack_top -> mse_start = (obj); \
+ mark_stack_top -> mse_descr = _descr; \
+ } \
+}
+
+/* Push the contents of current onto the mark stack if it is a valid */
+/* ptr to a currently unmarked object. Mark it. */
+/* If we assumed a standard-conforming compiler, we could probably */
+/* generate the exit_label transparently. */
+# define PUSH_CONTENTS(current, mark_stack_top, mark_stack_limit, \
+ source, exit_label) \
+{ \
+ hdr * my_hhdr; \
+ ptr_t my_current = current; \
+ \
+ GET_HDR(my_current, my_hhdr); \
+ if (IS_FORWARDING_ADDR_OR_NIL(my_hhdr)) { \
+ hdr * new_hdr = GC_invalid_header; \
+ my_current = GC_find_start(my_current, my_hhdr, &new_hdr); \
+ my_hhdr = new_hdr; \
+ } \
+ PUSH_CONTENTS_HDR(my_current, mark_stack_top, mark_stack_limit, \
+ source, exit_label, my_hhdr); \
+exit_label: ; \
+}
+
+/* As above, but use header cache for header lookup. */
+# define HC_PUSH_CONTENTS(current, mark_stack_top, mark_stack_limit, \
+ source, exit_label) \
+{ \
+ hdr * my_hhdr; \
+ ptr_t my_current = current; \
+ \
+ HC_GET_HDR(my_current, my_hhdr, source); \
+ PUSH_CONTENTS_HDR(my_current, mark_stack_top, mark_stack_limit, \
+ source, exit_label, my_hhdr); \
+exit_label: ; \
+}
+
+/* Set mark bit, exit if it was already set. */
+
+# ifdef USE_MARK_BYTES
+ /* Unlike the mark bit case, there is a race here, and we may set */
+ /* the bit twice in the concurrent case. This can result in the */
+ /* object being pushed twice. But that's only a performance issue. */
+# define SET_MARK_BIT_EXIT_IF_SET(hhdr,displ,exit_label) \
+ { \
+ register VOLATILE char * mark_byte_addr = \
+ hhdr -> hb_marks + ((displ) >> 1); \
+ register char mark_byte = *mark_byte_addr; \
+ \
+ if (mark_byte) goto exit_label; \
+ *mark_byte_addr = 1; \
+ }
+# else
+# define SET_MARK_BIT_EXIT_IF_SET(hhdr,displ,exit_label) \
+ { \
+ register word * mark_word_addr = hhdr -> hb_marks + divWORDSZ(displ); \
+ \
+ OR_WORD_EXIT_IF_SET(mark_word_addr, (word)1 << modWORDSZ(displ), \
+ exit_label); \
+ }
+# endif /* USE_MARK_BYTES */
+
+/* If the mark bit corresponding to current is not set, set it, and */
+/* push the contents of the object on the mark stack. For a small */
+/* object we assume that current is the (possibly interior) pointer */
+/* to the object. For large objects we assume that current points */
+/* to somewhere inside the first page of the object. If */
+/* GC_all_interior_pointers is set, it may have been previously */
+/* adjusted to make that true. */
+# define PUSH_CONTENTS_HDR(current, mark_stack_top, mark_stack_limit, \
+ source, exit_label, hhdr) \
+{ \
+ int displ; /* Displacement in block; first bytes, then words */ \
+ int map_entry; \
+ \
+ displ = HBLKDISPL(current); \
+ map_entry = MAP_ENTRY((hhdr -> hb_map), displ); \
+ displ = BYTES_TO_WORDS(displ); \
+ if (map_entry > CPP_MAX_OFFSET) { \
+ if (map_entry == OFFSET_TOO_BIG) { \
+ map_entry = displ % (hhdr -> hb_sz); \
+ displ -= map_entry; \
+ if (displ + (hhdr -> hb_sz) > BYTES_TO_WORDS(HBLKSIZE)) { \
+ GC_ADD_TO_BLACK_LIST_NORMAL((word)current, source); \
+ goto exit_label; \
+ } \
+ } else { \
+ GC_ADD_TO_BLACK_LIST_NORMAL((word)current, source); goto exit_label; \
+ } \
+ } else { \
+ displ -= map_entry; \
+ } \
+ GC_ASSERT(displ >= 0 && displ < MARK_BITS_PER_HBLK); \
+ SET_MARK_BIT_EXIT_IF_SET(hhdr, displ, exit_label); \
+ GC_STORE_BACK_PTR((ptr_t)source, (ptr_t)HBLKPTR(current) \
+ + WORDS_TO_BYTES(displ)); \
+ PUSH_OBJ(((word *)(HBLKPTR(current)) + displ), hhdr, \
+ mark_stack_top, mark_stack_limit) \
+}
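
[Editor's note: worked numbers for the displacement arithmetic above, under
assumed values (32-bit words, a small object with hb_sz == 8 words): a pointer
44 bytes into the block is word 11, the map entry for word 11 is 11 % 8 == 3,
so the object starts at word 11 - 3 == 8. In sketch form:]

    int object_start_word(int byte_displ, int obj_sz_words)
    {
        int displ = byte_displ / 4;             /* BYTES_TO_WORDS on 32 bits */
        return displ - displ % obj_sz_words;    /* what hb_map encodes */
    }
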
+
+#if defined(PRINT_BLACK_LIST) || defined(KEEP_BACK_PTRS)
+# define PUSH_ONE_CHECKED_STACK(p, source) \
+ GC_mark_and_push_stack(p, (ptr_t)(source))
+#else
+# define PUSH_ONE_CHECKED_STACK(p, source) \
+ GC_mark_and_push_stack(p)
+#endif
+
+/*
+ * Push a single value onto mark stack. Mark from the object pointed to by p.
+ * Invoke FIXUP_POINTER(p) before any further processing.
+ * P is considered valid even if it is an interior pointer.
+ * Previously marked objects are not pushed. Hence we make progress even
+ * if the mark stack overflows.
+ */
+
+# if NEED_FIXUP_POINTER
+ /* Try both the raw version and the fixed up one. */
+# define GC_PUSH_ONE_STACK(p, source) \
+ if ((ptr_t)(p) >= (ptr_t)GC_least_plausible_heap_addr \
+ && (ptr_t)(p) < (ptr_t)GC_greatest_plausible_heap_addr) { \
+ PUSH_ONE_CHECKED_STACK(p, source); \
+ } \
+ FIXUP_POINTER(p); \
+ if ((ptr_t)(p) >= (ptr_t)GC_least_plausible_heap_addr \
+ && (ptr_t)(p) < (ptr_t)GC_greatest_plausible_heap_addr) { \
+ PUSH_ONE_CHECKED_STACK(p, source); \
+ }
+# else /* !NEED_FIXUP_POINTER */
+# define GC_PUSH_ONE_STACK(p, source) \
+ if ((ptr_t)(p) >= (ptr_t)GC_least_plausible_heap_addr \
+ && (ptr_t)(p) < (ptr_t)GC_greatest_plausible_heap_addr) { \
+ PUSH_ONE_CHECKED_STACK(p, source); \
+ }
+# endif
+
+
+/*
+ * As above, but with interior pointer recognition as for
+ * normal heap pointers.
+ */
+# define GC_PUSH_ONE_HEAP(p,source) \
+ FIXUP_POINTER(p); \
+ if ((ptr_t)(p) >= (ptr_t)GC_least_plausible_heap_addr \
+ && (ptr_t)(p) < (ptr_t)GC_greatest_plausible_heap_addr) { \
+ GC_mark_stack_top = GC_mark_and_push( \
+ (GC_PTR)(p), GC_mark_stack_top, \
+ GC_mark_stack_limit, (GC_PTR *)(source)); \
+ }
+
+/* Mark starting at mark stack entry top (incl.) down to */
+/* mark stack entry bottom (incl.). Stop after performing */
+/* about one page worth of work. Return the new mark stack */
+/* top entry. */
+mse * GC_mark_from GC_PROTO((mse * top, mse * bottom, mse *limit));
+
+#define MARK_FROM_MARK_STACK() \
+ GC_mark_stack_top = GC_mark_from(GC_mark_stack_top, \
+ GC_mark_stack, \
+ GC_mark_stack + GC_mark_stack_size);
+
+/*
+ * Mark from one finalizable object using the specified
+ * mark proc. May not mark the object pointed to by
+ * real_ptr. That is the job of the caller, if appropriate
+ */
+# define GC_MARK_FO(real_ptr, mark_proc) \
+{ \
+ (*(mark_proc))(real_ptr); \
+ while (!GC_mark_stack_empty()) MARK_FROM_MARK_STACK(); \
+ if (GC_mark_state != MS_NONE) { \
+ GC_set_mark_bit(real_ptr); \
+ while (!GC_mark_some((ptr_t)0)) {} \
+ } \
+}
+
+extern GC_bool GC_mark_stack_too_small;
+ /* We need a larger mark stack. May be */
+ /* set by client supplied mark routines.*/
+
+typedef int mark_state_t; /* Current state of marking, as follows:*/
+ /* Used to remember where we are during */
+ /* concurrent marking. */
+
+ /* We say something is dirty if it was */
+ /* written since the last time we */
+ /* retrieved dirty bits. We say it's */
+ /* grungy if it was marked dirty in the */
+ /* last set of bits we retrieved. */
+
+ /* Invariant I: all roots and marked */
+ /* objects p are either dirty, or point */
+ /* to objects q that are either marked */
+ /* or a pointer to q appears in a range */
+ /* on the mark stack. */
+
+# define MS_NONE 0 /* No marking in progress. I holds. */
+ /* Mark stack is empty. */
+
+# define MS_PUSH_RESCUERS 1 /* Rescuing objects are currently */
+ /* being pushed. I holds, except */
+ /* that grungy roots may point to */
+ /* unmarked objects, as may marked */
+ /* grungy objects above scan_ptr. */
+
+# define MS_PUSH_UNCOLLECTABLE 2
+ /* I holds, except that marked */
+ /* uncollectable objects above scan_ptr */
+ /* may point to unmarked objects. */
+ /* Roots may point to unmarked objects */
+
+# define MS_ROOTS_PUSHED 3 /* I holds, mark stack may be nonempty */
+
+# define MS_PARTIALLY_INVALID 4 /* I may not hold, e.g. because of M.S. */
+ /* overflow. However marked heap */
+ /* objects below scan_ptr point to */
+ /* marked or stacked objects. */
+
+# define MS_INVALID 5 /* I may not hold. */
+
+extern mark_state_t GC_mark_state;
+
+#endif /* GC_PMARK_H */
+
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_priv.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_priv.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_priv.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/gc_priv.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,2058 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996-1999 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 1999-2001 by Hewlett-Packard Company. All rights reserved.
+ *
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+
+# ifndef GC_PRIVATE_H
+# define GC_PRIVATE_H
+
+/* Autoconf definitions. */
+/* FIXME: This should really be included directly from each .c file. */
+#include <gc_config.h>
+
+#if defined(mips) && defined(SYSTYPE_BSD) && defined(sony_news)
+ /* sony RISC NEWS, NEWSOS 4 */
+# define BSD_TIME
+/* typedef long ptrdiff_t; -- necessary on some really old systems */
+#endif
+
+#if defined(mips) && defined(SYSTYPE_BSD43)
+ /* MIPS RISCOS 4 */
+# define BSD_TIME
+#endif
+
+#ifdef DGUX
+# include <sys/types.h>
+# include <sys/time.h>
+# include <sys/resource.h>
+#endif /* DGUX */
+
+#ifdef BSD_TIME
+# include <sys/types.h>
+# include <sys/time.h>
+# include <sys/resource.h>
+#endif /* BSD_TIME */
+
+# ifndef _GC_H
+# include "../gc.h"
+# endif
+
+# ifndef GC_MARK_H
+# include "../gc_mark.h"
+# endif
+
+typedef GC_word word;
+typedef GC_signed_word signed_word;
+
+typedef int GC_bool;
+# define TRUE 1
+# define FALSE 0
+
+typedef char * ptr_t; /* A generic pointer to which we can add */
+ /* byte displacements. */
+ /* Preferably identical to caddr_t, if it */
+ /* exists. */
+
+# ifndef GCCONFIG_H
+# include "gcconfig.h"
+# endif
+
+# ifndef HEADERS_H
+# include "gc_hdrs.h"
+# endif
+
+#if defined(__STDC__)
+# include <stdlib.h>
+# if !(defined( sony_news ) )
+# include <stddef.h>
+# endif
+# define VOLATILE volatile
+#else
+# ifdef MSWIN32
+# include <stdlib.h>
+# endif
+# define VOLATILE
+#endif
+
+#if 0 /* defined(__GNUC__) doesn't work yet */
+# define EXPECT(expr, outcome) __builtin_expect(expr,outcome)
+ /* Equivalent to (expr), but predict that usually (expr)==outcome. */
+#else
+# define EXPECT(expr, outcome) (expr)
+#endif /* __GNUC__ */
+
+# ifndef GC_LOCKS_H
+# include "gc_locks.h"
+# endif
+
+# ifdef STACK_GROWS_DOWN
+# define COOLER_THAN >
+# define HOTTER_THAN <
+# define MAKE_COOLER(x,y) if ((word)(x)+(y) > (word)(x)) {(x) += (y);} \
+ else {(x) = (word)ONES;}
+# define MAKE_HOTTER(x,y) (x) -= (y)
+# else
+# define COOLER_THAN <
+# define HOTTER_THAN >
+# define MAKE_COOLER(x,y) if ((word)(x)-(y) < (word)(x)) {(x) -= (y);} else {(x) = 0;}
+# define MAKE_HOTTER(x,y) (x) += (y)
+# endif
+
+#if defined(AMIGA) && defined(__SASC)
+# define GC_FAR __far
+#else
+# define GC_FAR
+#endif
+
+
+/*********************************/
+/* */
+/* Definitions for conservative */
+/* collector */
+/* */
+/*********************************/
+
+/*********************************/
+/* */
+/* Easily changeable parameters */
+/* */
+/*********************************/
+
+/* #define STUBBORN_ALLOC */
+ /* Enable stubborn allocation, and thus a limited */
+ /* form of incremental collection w/o dirty bits. */
+
+/* #define ALL_INTERIOR_POINTERS */
+ /* Forces all pointers into the interior of an */
+ /* object to be considered valid. Also causes the */
+ /* sizes of all objects to be inflated by at least */
+ /* one byte. This should suffice to guarantee */
+ /* that in the presence of a compiler that does */
+ /* not perform garbage-collector-unsafe */
+ /* optimizations, all portable, strictly ANSI */
+ /* conforming C programs should be safely usable */
+ /* with malloc replaced by GC_malloc and free */
+ /* calls removed. There are several disadvantages: */
+ /* 1. There are probably no interesting, portable, */
+ /* strictly ANSI conforming C programs. */
+ /* 2. This option makes it hard for the collector */
+ /* to allocate space that is not ``pointed to'' */
+ /* by integers, etc. Under SunOS 4.X with a */
+ /* statically linked libc, we empirically */
+ /* observed that it would be difficult to */
+ /* allocate individual objects larger than 100K. */
+ /* Even if only smaller objects are allocated, */
+ /* more swap space is likely to be needed. */
+ /* Fortunately, much of this will never be */
+ /* touched. */
+ /* If you can easily avoid using this option, do. */
+ /* If not, try to keep individual objects small. */
+ /* This is now really controlled at startup, */
+ /* through GC_all_interior_pointers. */
+
+#define PRINTSTATS /* Print garbage collection statistics */
+ /* For less verbose output, undefine in reclaim.c */
+
+#define PRINTTIMES /* Print the amount of time consumed by each garbage */
+ /* collection. */
+
+#define PRINTBLOCKS /* Print object sizes associated with heap blocks, */
+ /* whether the objects are atomic or composite, and */
+ /* whether or not the block was found to be empty */
+ /* during the reclaim phase. Typically generates */
+ /* about one screenful per garbage collection. */
+#undef PRINTBLOCKS
+
+#ifdef SILENT
+# ifdef PRINTSTATS
+# undef PRINTSTATS
+# endif
+# ifdef PRINTTIMES
+# undef PRINTTIMES
+# endif
+# ifdef PRINTNBLOCKS
+# undef PRINTNBLOCKS
+# endif
+#endif
+
+#if defined(PRINTSTATS) && !defined(GATHERSTATS)
+# define GATHERSTATS
+#endif
+
+#if defined(PRINTSTATS) || !defined(SMALL_CONFIG)
+# define CONDPRINT /* Print some things if GC_print_stats is set */
+#endif
+
+#define GC_INVOKE_FINALIZERS() GC_notify_or_invoke_finalizers()
+
+#define MERGE_SIZES /* Round up some object sizes, so that fewer distinct */
+ /* free lists are actually maintained. This applies */
+ /* only to the top level routines in misc.c, not to */
+ /* user generated code that calls GC_allocobj and */
+ /* GC_allocaobj directly. */
+ /* Slows down average programs slightly. May however */
+ /* substantially reduce fragmentation if allocation */
+ /* request sizes are widely scattered. */
+ /* May save significant amounts of space for obj_map */
+ /* entries. */
+
+#if defined(USE_MARK_BYTES) && !defined(ALIGN_DOUBLE)
+# define ALIGN_DOUBLE
+ /* We use one byte for every 2 words, which doesn't allow for */
+ /* odd numbered words to have mark bits. */
+#endif
+
+#if defined(GC_GCJ_SUPPORT) && ALIGNMENT < 8 && !defined(ALIGN_DOUBLE)
+ /* GCJ's Hashtable synchronization code requires 64-bit alignment. */
+# define ALIGN_DOUBLE
+#endif
+
+/* ALIGN_DOUBLE requires MERGE_SIZES at present. */
+# if defined(ALIGN_DOUBLE) && !defined(MERGE_SIZES)
+# define MERGE_SIZES
+# endif
+
+#if !defined(DONT_ADD_BYTE_AT_END)
+# define EXTRA_BYTES GC_all_interior_pointers
+#else
+# define EXTRA_BYTES 0
+#endif
+
+
+# ifndef LARGE_CONFIG
+# define MINHINCR 16 /* Minimum heap increment, in blocks of HBLKSIZE */
+ /* Must be multiple of largest page size. */
+# define MAXHINCR 2048 /* Maximum heap increment, in blocks */
+# else
+# define MINHINCR 64
+# define MAXHINCR 4096
+# endif
+
+# define TIME_LIMIT 50 /* We try to keep pause times from exceeding */
+ /* this by much. In milliseconds. */
+
+# define BL_LIMIT GC_black_list_spacing
+ /* If we need a block of N bytes, and we have */
+ /* a block of N + BL_LIMIT bytes available, */
+ /* and N > BL_LIMIT, */
+ /* but all possible positions in it are */
+ /* blacklisted, we just use it anyway (and */
+ /* print a warning, if warnings are enabled). */
+ /* This risks subsequently leaking the block */
+ /* due to a false reference. But not using */
+ /* the block risks unreasonable immediate */
+ /* heap growth. */
+
+/*********************************/
+/* */
+/* Stack saving for debugging */
+/* */
+/*********************************/
+
+#ifdef NEED_CALLINFO
+ struct callinfo {
+ word ci_pc; /* Caller, not callee, pc */
+# if NARGS > 0
+ word ci_arg[NARGS]; /* bit-wise complement to avoid retention */
+# endif
+# if defined(ALIGN_DOUBLE) && (NFRAMES * (NARGS + 1)) % 2 == 1
+ /* Likely alignment problem. */
+ word ci_dummy;
+# endif
+ };
+#endif
+
+#ifdef SAVE_CALL_CHAIN
+
+/* Fill in the pc and argument information for up to NFRAMES of my */
+/* callers. Ignore my frame and my callers frame. */
+void GC_save_callers GC_PROTO((struct callinfo info[NFRAMES]));
+
+void GC_print_callers GC_PROTO((struct callinfo info[NFRAMES]));
+
+#endif
+
+
+/*********************************/
+/* */
+/* OS interface routines */
+/* */
+/*********************************/
+
+#ifdef BSD_TIME
+# undef CLOCK_TYPE
+# undef GET_TIME
+# undef MS_TIME_DIFF
+# define CLOCK_TYPE struct timeval
+# define GET_TIME(x) { struct rusage rusage; \
+ getrusage (RUSAGE_SELF, &rusage); \
+ x = rusage.ru_utime; }
+# define MS_TIME_DIFF(a,b) ((double) (a.tv_sec - b.tv_sec) * 1000.0 \
+ + (double) (a.tv_usec - b.tv_usec) / 1000.0)
+#else /* !BSD_TIME */
+# if defined(MSWIN32) || defined(MSWINCE)
+# include <windows.h>
+# include <winbase.h>
+# define CLOCK_TYPE DWORD
+# define GET_TIME(x) x = GetTickCount()
+# define MS_TIME_DIFF(a,b) ((long)((a)-(b)))
+# else /* !MSWIN32, !MSWINCE, !BSD_TIME */
+# include <time.h>
+# if !defined(__STDC__) && defined(SPARC) && defined(SUNOS4)
+ clock_t clock(); /* Not in time.h, where it belongs */
+# endif
+# if defined(FREEBSD) && !defined(CLOCKS_PER_SEC)
+# include <machine/limits.h>
+# define CLOCKS_PER_SEC CLK_TCK
+# endif
+# if !defined(CLOCKS_PER_SEC)
+# define CLOCKS_PER_SEC 1000000
+/*
+ * This is technically a bug in the implementation. ANSI requires that
+ * CLOCKS_PER_SEC be defined. But at least under SunOS4.1.1, it isn't.
+ * Also note that the combination of ANSI C and POSIX is incredibly gross
+ * here. The type clock_t is used by both clock() and times(). But on
+ * some machines these use different notions of a clock tick, CLOCKS_PER_SEC
+ * seems to apply only to clock. Hence we use it here. On many machines,
+ * including SunOS, clock actually uses units of microseconds (which are
+ * not really clock ticks).
+ */
+# endif
+# define CLOCK_TYPE clock_t
+# define GET_TIME(x) x = clock()
+# define MS_TIME_DIFF(a,b) ((unsigned long) \
+ (1000.0*(double)((a)-(b))/(double)CLOCKS_PER_SEC))
+# endif /* !MSWIN32 */
+#endif /* !BSD_TIME */
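
[Editor's note: all three variants obey the same usage pattern; a sketch of
timing one phase with them, using plain printf rather than the collector's own
logging.]

    #include <stdio.h>

    void timed_phase_sketch(void)
    {
        CLOCK_TYPE start_time, done_time;

        GET_TIME(start_time);
        /* ... e.g. stop the world and mark ... */
        GET_TIME(done_time);
        printf("phase took %lu msecs\n",
               (unsigned long) MS_TIME_DIFF(done_time, start_time));
    }
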
+
+/* We use bzero and bcopy internally. They may not be available. */
+# if defined(SPARC) && defined(SUNOS4)
+# define BCOPY_EXISTS
+# endif
+# if defined(M68K) && defined(AMIGA)
+# define BCOPY_EXISTS
+# endif
+# if defined(M68K) && defined(NEXT)
+# define BCOPY_EXISTS
+# endif
+# if defined(VAX)
+# define BCOPY_EXISTS
+# endif
+# if defined(AMIGA)
+# include <string.h>
+# define BCOPY_EXISTS
+# endif
+# if defined(DARWIN)
+# include <string.h>
+# define BCOPY_EXISTS
+# endif
+
+# ifndef BCOPY_EXISTS
+# include <string.h>
+# define BCOPY(x,y,n) memcpy(y, x, (size_t)(n))
+# define BZERO(x,n) memset(x, 0, (size_t)(n))
+# else
+# define BCOPY(x,y,n) bcopy((char *)(x),(char *)(y),(int)(n))
+# define BZERO(x,n) bzero((char *)(x),(int)(n))
+# endif
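
[Editor's note: note the bcopy-style argument order: BCOPY takes the source
first, the reverse of memcpy. For illustration:]

    void copy_sketch(void)
    {
        char src[16] = "hello";
        char dst[16];

        BCOPY(src, dst, sizeof src);   /* memcpy(dst, src, 16) underneath */
        BZERO(dst, sizeof dst);        /* then wipe the copy again */
    }
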
+
+/* Delay any interrupts or signals that may abort this thread. Data */
+/* structures are in a consistent state outside this pair of calls. */
+/* ANSI C allows both to be empty (though the standard isn't very */
+/* clear on that point). Standard malloc implementations are usually */
+/* neither interruptible nor thread-safe, and thus correspond to */
+/* empty definitions. */
+/* It probably doesn't make any sense to declare these to be nonempty */
+/* if the code is being optimized, since signal safety relies on some */
+/* ordering constraints that are typically not obeyed by optimizing */
+/* compilers. */
+# ifdef PCR
+# define DISABLE_SIGNALS() \
+ PCR_Th_SetSigMask(PCR_allSigsBlocked,&GC_old_sig_mask)
+# define ENABLE_SIGNALS() \
+ PCR_Th_SetSigMask(&GC_old_sig_mask, NIL)
+# else
+# if defined(THREADS) || defined(AMIGA) \
+ || defined(MSWIN32) || defined(MSWINCE) || defined(MACOS) \
+ || defined(DJGPP) || defined(NO_SIGNALS)
+ /* Also useful for debugging. */
+ /* Should probably use thr_sigsetmask for GC_SOLARIS_THREADS. */
+# define DISABLE_SIGNALS()
+# define ENABLE_SIGNALS()
+# else
+# define DISABLE_SIGNALS() GC_disable_signals()
+ void GC_disable_signals();
+# define ENABLE_SIGNALS() GC_enable_signals()
+ void GC_enable_signals();
+# endif
+# endif
+
+/*
+ * Stop and restart mutator threads.
+ */
+# ifdef PCR
+# include "th/PCR_ThCtl.h"
+# define STOP_WORLD() \
+ PCR_ThCtl_SetExclusiveMode(PCR_ThCtl_ExclusiveMode_stopNormal, \
+ PCR_allSigsBlocked, \
+ PCR_waitForever)
+# define START_WORLD() \
+ PCR_ThCtl_SetExclusiveMode(PCR_ThCtl_ExclusiveMode_null, \
+ PCR_allSigsBlocked, \
+ PCR_waitForever);
+# else
+# if defined(GC_SOLARIS_THREADS) || defined(GC_WIN32_THREADS) \
+ || defined(GC_PTHREADS)
+ void GC_stop_world();
+ void GC_start_world();
+# define STOP_WORLD() GC_stop_world()
+# define START_WORLD() GC_start_world()
+# else
+# define STOP_WORLD()
+# define START_WORLD()
+# endif
+# endif
+
+/* Abandon ship */
+# ifdef PCR
+# define ABORT(s) PCR_Base_Panic(s)
+# else
+# ifdef SMALL_CONFIG
+# define ABORT(msg) abort();
+# else
+ GC_API void GC_abort GC_PROTO((GC_CONST char * msg));
+# define ABORT(msg) GC_abort(msg);
+# endif
+# endif
+
+/* Exit abnormally, but without making a mess (e.g. out of memory) */
+# ifdef PCR
+# define EXIT() PCR_Base_Exit(1,PCR_waitForever)
+# else
+# define EXIT() (void)exit(1)
+# endif
+
+/* Print warning message, e.g. almost out of memory. */
+# define WARN(msg,arg) (*GC_current_warn_proc)("GC Warning: " msg, (GC_word)(arg))
+extern GC_warn_proc GC_current_warn_proc;
+
+/* Get environment entry */
+#if !defined(NO_GETENV)
+# if defined(EMPTY_GETENV_RESULTS)
+ /* Workaround for a reputed Wine bug. */
+ static inline char * fixed_getenv(const char *name)
+ {
+ char * tmp = getenv(name);
+ if (tmp == 0 || strlen(tmp) == 0)
+ return 0;
+ return tmp;
+ }
+# define GETENV(name) fixed_getenv(name)
+# else
+# define GETENV(name) getenv(name)
+# endif
+#else
+# define GETENV(name) 0
+#endif
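+
+/* Usage sketch (the variable name below is purely illustrative):	*/
+/* GETENV returns 0 when the variable is unset and, with		*/
+/* EMPTY_GETENV_RESULTS, also when it is set to the empty string.	*/
+#if 0
+  static int getenv_example(void)
+  {
+    return GETENV("SOME_GC_TUNING_VAR") != 0;
+  }
+#endif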
+
+#if defined(DARWIN)
+# if defined(POWERPC)
+# if CPP_WORDSZ == 32
+# define GC_THREAD_STATE_T ppc_thread_state_t
+# define GC_MACH_THREAD_STATE PPC_THREAD_STATE
+# define GC_MACH_THREAD_STATE_COUNT PPC_THREAD_STATE_COUNT
+# define GC_MACH_HEADER mach_header
+# define GC_MACH_SECTION section
+# else
+# define GC_THREAD_STATE_T ppc_thread_state64_t
+# define GC_MACH_THREAD_STATE PPC_THREAD_STATE64
+# define GC_MACH_THREAD_STATE_COUNT PPC_THREAD_STATE64_COUNT
+# define GC_MACH_HEADER mach_header_64
+# define GC_MACH_SECTION section_64
+# endif
+# elif defined(I386) || defined(X86_64)
+# if CPP_WORDSZ == 32
+# define GC_THREAD_STATE_T x86_thread_state32_t
+# define GC_MACH_THREAD_STATE x86_THREAD_STATE32
+# define GC_MACH_THREAD_STATE_COUNT x86_THREAD_STATE32_COUNT
+# define GC_MACH_HEADER mach_header
+# define GC_MACH_SECTION section
+# else
+# define GC_THREAD_STATE_T x86_thread_state64_t
+# define GC_MACH_THREAD_STATE x86_THREAD_STATE64
+# define GC_MACH_THREAD_STATE_COUNT x86_THREAD_STATE64_COUNT
+# define GC_MACH_HEADER mach_header_64
+# define GC_MACH_SECTION section_64
+# endif
+# else
+# error define GC_THREAD_STATE_T
+# define GC_MACH_THREAD_STATE MACHINE_THREAD_STATE
+# define GC_MACH_THREAD_STATE_COUNT MACHINE_THREAD_STATE_COUNT
+# endif
+/* Try to work out the right way to access thread state structure members.
+ The structure has changed its definition in different Darwin versions.
+   This now defaults to the (older) names without __, thus hopefully
+ not breaking any existing Makefile.direct builds. */
+# if defined (HAS_PPC_THREAD_STATE___R0) \
+ || defined (HAS_PPC_THREAD_STATE64___R0) \
+ || defined (HAS_X86_THREAD_STATE32___EAX) \
+ || defined (HAS_X86_THREAD_STATE64___RAX)
+# define THREAD_FLD(x) __ ## x
+# else
+# define THREAD_FLD(x) x
+# endif
+#endif
+/*********************************/
+/* */
+/* Word-size-dependent defines */
+/* */
+/*********************************/
+
+#if CPP_WORDSZ == 32
+# define WORDS_TO_BYTES(x) ((x)<<2)
+# define BYTES_TO_WORDS(x) ((x)>>2)
+# define LOGWL ((word)5) /* log[2] of CPP_WORDSZ */
+# define modWORDSZ(n) ((n) & 0x1f) /* n mod size of word */
+# if ALIGNMENT != 4
+# define UNALIGNED
+# endif
+#endif
+
+#if CPP_WORDSZ == 64
+# define WORDS_TO_BYTES(x) ((x)<<3)
+# define BYTES_TO_WORDS(x) ((x)>>3)
+# define LOGWL ((word)6) /* log[2] of CPP_WORDSZ */
+# define modWORDSZ(n) ((n) & 0x3f) /* n mod size of word */
+# if ALIGNMENT != 8
+# define UNALIGNED
+# endif
+#endif
+
+#define WORDSZ ((word)CPP_WORDSZ)
+#define SIGNB ((word)1 << (WORDSZ-1))
+#define BYTES_PER_WORD ((word)(sizeof (word)))
+#define ONES ((word)(signed_word)(-1))
+#define divWORDSZ(n) ((n) >> LOGWL) /* divide n by size of word */
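+
+/* Worked example (assuming a 64-bit configuration):			*/
+/* WORDS_TO_BYTES(3) == 24 and BYTES_TO_WORDS(24) == 3;		*/
+/* divWORDSZ(70) == 70 >> 6 == 1 and modWORDSZ(70) == 70 & 0x3f == 6,	*/
+/* i.e. bit number 70 lives in word 1 at bit position 6.		*/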
+
+/*********************/
+/* */
+/* Size Parameters */
+/* */
+/*********************/
+
+/* heap block size, bytes. Should be power of 2 */
+
+#ifndef HBLKSIZE
+# ifdef SMALL_CONFIG
+# define CPP_LOG_HBLKSIZE 10
+# else
+# if (CPP_WORDSZ == 32) || (defined(HPUX) && defined(HP_PA))
+ /* HPUX/PA seems to use 4K pages with the 64 bit ABI */
+# define CPP_LOG_HBLKSIZE 12
+# else
+# define CPP_LOG_HBLKSIZE 13
+# endif
+# endif
+#else
+# if HBLKSIZE == 512
+# define CPP_LOG_HBLKSIZE 9
+# endif
+# if HBLKSIZE == 1024
+# define CPP_LOG_HBLKSIZE 10
+# endif
+# if HBLKSIZE == 2048
+# define CPP_LOG_HBLKSIZE 11
+# endif
+# if HBLKSIZE == 4096
+# define CPP_LOG_HBLKSIZE 12
+# endif
+# if HBLKSIZE == 8192
+# define CPP_LOG_HBLKSIZE 13
+# endif
+# if HBLKSIZE == 16384
+# define CPP_LOG_HBLKSIZE 14
+# endif
+# ifndef CPP_LOG_HBLKSIZE
+ --> fix HBLKSIZE
+# endif
+# undef HBLKSIZE
+#endif
+# define CPP_HBLKSIZE (1 << CPP_LOG_HBLKSIZE)
+# define LOG_HBLKSIZE ((word)CPP_LOG_HBLKSIZE)
+# define HBLKSIZE ((word)CPP_HBLKSIZE)
+
+
+/* max size objects supported by freelist (larger objects may be */
+/* allocated, but less efficiently) */
+
+#define CPP_MAXOBJBYTES (CPP_HBLKSIZE/2)
+#define MAXOBJBYTES ((word)CPP_MAXOBJBYTES)
+#define CPP_MAXOBJSZ BYTES_TO_WORDS(CPP_MAXOBJBYTES)
+#define MAXOBJSZ ((word)CPP_MAXOBJSZ)
+
+# define divHBLKSZ(n) ((n) >> LOG_HBLKSIZE)
+
+# define HBLK_PTR_DIFF(p,q) divHBLKSZ((ptr_t)p - (ptr_t)q)
+ /* Equivalent to subtracting 2 hblk pointers. */
+ /* We do it this way because a compiler should */
+ /* find it hard to use an integer division */
+	/* find it hard to use an integer division	*/
+	/* instead of a shift.  The bundled SunOS 4.1	*/
+	/* compiler otherwise sometimes pessimizes the	*/
+	/* subtraction into a call to .div.		*/
+
+# define modHBLKSZ(n) ((n) & (HBLKSIZE-1))
+
+# define HBLKPTR(objptr) ((struct hblk *)(((word) (objptr)) & ~(HBLKSIZE-1)))
+
+# define HBLKDISPL(objptr) (((word) (objptr)) & (HBLKSIZE-1))
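+
+/* Worked example (assuming 4K heap blocks, CPP_LOG_HBLKSIZE == 12):	*/
+/* for objptr == 0x40235a8, HBLKPTR(objptr) == 0x4023000 and		*/
+/* HBLKDISPL(objptr) == 0x5a8, i.e. the containing block and the	*/
+/* byte offset within it.						*/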
+
+/* Round up byte allocation requests to integral number of words, etc. */
+# define ROUNDED_UP_WORDS(n) \
+ BYTES_TO_WORDS((n) + (WORDS_TO_BYTES(1) - 1 + EXTRA_BYTES))
+# ifdef ALIGN_DOUBLE
+# define ALIGNED_WORDS(n) \
+ (BYTES_TO_WORDS((n) + WORDS_TO_BYTES(2) - 1 + EXTRA_BYTES) & ~1)
+# else
+# define ALIGNED_WORDS(n) ROUNDED_UP_WORDS(n)
+# endif
+# define SMALL_OBJ(bytes) ((bytes) <= (MAXOBJBYTES - EXTRA_BYTES))
+# define ADD_SLOP(bytes) ((bytes) + EXTRA_BYTES)
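+/* Worked example (assuming 32-bit words and EXTRA_BYTES == 0 for	*/
+/* simplicity): ROUNDED_UP_WORDS(10) == BYTES_TO_WORDS(13) == 3,	*/
+/* while with ALIGN_DOUBLE defined, ALIGNED_WORDS(10) ==		*/
+/* (BYTES_TO_WORDS(17) & ~1) == 4, preserving doubleword alignment.	*/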
+# ifndef MIN_WORDS
+ /* MIN_WORDS is the size of the smallest allocated object. */
+ /* 1 and 2 are the only valid values. */
+ /* 2 must be used if: */
+ /* - GC_gcj_malloc can be used for objects of requested */
+ /* size smaller than 2 words, or */
+ /* - USE_MARK_BYTES is defined. */
+# if defined(USE_MARK_BYTES) || defined(GC_GCJ_SUPPORT)
+# define MIN_WORDS 2 /* Smallest allocated object. */
+# else
+# define MIN_WORDS 1
+# endif
+# endif
+
+
+/*
+ * Hash table representation of sets of pages. This assumes it is
+ * OK to add spurious entries to sets.
+ * Used by black-listing code, and perhaps by dirty bit maintenance code.
+ */
+
+# ifdef LARGE_CONFIG
+# define LOG_PHT_ENTRIES 20 /* Collisions likely at 1M blocks, */
+ /* which is >= 4GB. Each table takes */
+ /* 128KB, some of which may never be */
+ /* touched. */
+# else
+# ifdef SMALL_CONFIG
+# define LOG_PHT_ENTRIES 14 /* Collisions are likely if heap grows */
+ /* to more than 16K hblks = 64MB. */
+ /* Each hash table occupies 2K bytes. */
+# else /* default "medium" configuration */
+# define LOG_PHT_ENTRIES 16 /* Collisions are likely if heap grows */
+ /* to more than 64K hblks >= 256MB. */
+ /* Each hash table occupies 8K bytes. */
+ /* Even for somewhat smaller heaps, */
+ /* say half that, collisions may be an */
+ /* issue because we blacklist */
+ /* addresses outside the heap. */
+# endif
+# endif
+# define PHT_ENTRIES ((word)1 << LOG_PHT_ENTRIES)
+# define PHT_SIZE (PHT_ENTRIES >> LOGWL)
+typedef word page_hash_table[PHT_SIZE];
+
+# define PHT_HASH(addr) ((((word)(addr)) >> LOG_HBLKSIZE) & (PHT_ENTRIES - 1))
+
+# define get_pht_entry_from_index(bl, index) \
+ (((bl)[divWORDSZ(index)] >> modWORDSZ(index)) & 1)
+# define set_pht_entry_from_index(bl, index) \
+ (bl)[divWORDSZ(index)] |= (word)1 << modWORDSZ(index)
+# define clear_pht_entry_from_index(bl, index) \
+ (bl)[divWORDSZ(index)] &= ~((word)1 << modWORDSZ(index))
+/* And a dumb but thread-safe version of set_pht_entry_from_index. */
+/* This sets (many) extra bits. */
+# define set_pht_entry_from_index_safe(bl, index) \
+ (bl)[divWORDSZ(index)] = ONES
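+
+/* Illustrative sketch (hypothetical helper, not part of this		*/
+/* header): recording and testing a page in a page_hash_table.		*/
+/* Hash collisions only ever add spurious entries, which the		*/
+/* comment above declares acceptable.					*/
+#if 0
+  static word pht_example(page_hash_table bl, ptr_t addr)
+  {
+    word i = PHT_HASH(addr);
+    set_pht_entry_from_index(bl, i);
+    return get_pht_entry_from_index(bl, i);	/* now 1 */
+  }
+#endif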
+
+
+
+/********************************************/
+/* */
+/* H e a p B l o c k s */
+/* */
+/********************************************/
+
+/* heap block header */
+#define HBLKMASK (HBLKSIZE-1)
+
+#define BITS_PER_HBLK (CPP_HBLKSIZE * 8)
+
+#define MARK_BITS_PER_HBLK (BITS_PER_HBLK/CPP_WORDSZ)
+ /* upper bound */
+ /* We allocate 1 bit/word, unless USE_MARK_BYTES */
+ /* is defined. Only the first word */
+ /* in each object is actually marked. */
+
+# ifdef USE_MARK_BYTES
+# define MARK_BITS_SZ (MARK_BITS_PER_HBLK/2)
+ /* Unlike the other case, this is in units of bytes. */
+ /* We actually allocate only every second mark bit, since we */
+ /* force all objects to be doubleword aligned. */
+ /* However, each mark bit is allocated as a byte. */
+# else
+# define MARK_BITS_SZ (MARK_BITS_PER_HBLK/CPP_WORDSZ)
+# endif
+
+/* We maintain layout maps for heap blocks containing objects of a given */
+/* size. Each entry in this map describes a byte offset and has the */
+/* following type. */
+typedef unsigned char map_entry_type;
+
+struct hblkhdr {
+    word hb_sz;  /* If in use, the size in words of objects in the block. */
+		 /* If free, the size in bytes of the whole block.	   */
+ struct hblk * hb_next; /* Link field for hblk free list */
+ /* and for lists of chunks waiting to be */
+ /* reclaimed. */
+ struct hblk * hb_prev; /* Backwards link for free list. */
+ word hb_descr; /* object descriptor for marking. See */
+ /* mark.h. */
+ map_entry_type * hb_map;
+ /* A pointer to a pointer validity map of the block. */
+ /* See GC_obj_map. */
+ /* Valid for all blocks with headers. */
+ /* Free blocks point to GC_invalid_map. */
+ unsigned char hb_obj_kind;
+ /* Kind of objects in the block. Each kind */
+ /* identifies a mark procedure and a set of */
+ /* list headers. Sometimes called regions. */
+ unsigned char hb_flags;
+# define IGNORE_OFF_PAGE 1 /* Ignore pointers that do not */
+ /* point to the first page of */
+ /* this object. */
+# define WAS_UNMAPPED 2 /* This is a free block, which has */
+ /* been unmapped from the address */
+ /* space. */
+ /* GC_remap must be invoked on it */
+ /* before it can be reallocated. */
+ /* Only set with USE_MUNMAP. */
+ unsigned short hb_last_reclaimed;
+ /* Value of GC_gc_no when block was */
+ /* last allocated or swept. May wrap. */
+ /* For a free block, this is maintained */
+ /* only for USE_MUNMAP, and indicates */
+ /* when the header was allocated, or */
+ /* when the size of the block last */
+ /* changed. */
+# ifdef USE_MARK_BYTES
+ union {
+ char _hb_marks[MARK_BITS_SZ];
+ /* The i'th byte is 1 if the object */
+	     		/* starting at word 2i is marked, 0 otherwise.	*/
+ word dummy; /* Force word alignment of mark bytes. */
+ } _mark_byte_union;
+# define hb_marks _mark_byte_union._hb_marks
+# else
+ word hb_marks[MARK_BITS_SZ];
+ /* Bit i in the array refers to the */
+ /* object starting at the ith word (header */
+ /* INCLUDED) in the heap block. */
+ /* The lsb of word 0 is numbered 0. */
+ /* Unused bits are invalid, and are */
+      		      /* occasionally set, e.g. for uncollectable	*/
+ /* objects. */
+# endif /* !USE_MARK_BYTES */
+};
+
+/* heap block body */
+
+# define BODY_SZ (HBLKSIZE/sizeof(word))
+
+struct hblk {
+ word hb_body[BODY_SZ];
+};
+
+# define HBLK_IS_FREE(hdr) ((hdr) -> hb_map == GC_invalid_map)
+
+# define OBJ_SZ_TO_BLOCKS(sz) \
+ divHBLKSZ(WORDS_TO_BYTES(sz) + HBLKSIZE-1)
+ /* Size of block (in units of HBLKSIZE) needed to hold objects of */
+ /* given sz (in words). */
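+	/* Worked example (4K blocks, 32-bit words):			*/
+	/* OBJ_SZ_TO_BLOCKS(2000) == divHBLKSZ(8000 + 4095) == 2,	*/
+	/* i.e. a 2000-word object needs two heap blocks.		*/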
+
+/* Object free list link */
+# define obj_link(p) (*(ptr_t *)(p))
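+
+/* Illustrative sketch (hypothetical, for exposition): a free object	*/
+/* stores the link to the next free object in its first word, so a	*/
+/* free list can be walked as follows.					*/
+#if 0
+  static int free_list_example(ptr_t list)
+  {
+    int n = 0;
+    for (; list != 0; list = obj_link(list)) ++n;
+    return n;	/* number of objects on the free list */
+  }
+#endif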
+
+# define LOG_MAX_MARK_PROCS 6
+# define MAX_MARK_PROCS (1 << LOG_MAX_MARK_PROCS)
+
+/* Root sets. Logically private to mark_rts.c. But we don't want the */
+/* tables scanned, so we put them here. */
+/* MAX_ROOT_SETS is the maximum number of ranges that can be */
+/* registered as static roots. */
+# ifdef LARGE_CONFIG
+# define MAX_ROOT_SETS 4096
+# else
+ /* GCJ LOCAL: MAX_ROOT_SETS increased to permit more shared */
+ /* libraries to be loaded. */
+# define MAX_ROOT_SETS 1024
+# endif
+
+# define MAX_EXCLUSIONS (MAX_ROOT_SETS/4)
+/* Maximum number of segments that can be excluded from root sets. */
+
+/*
+ * Data structure for excluded static roots.
+ */
+struct exclusion {
+ ptr_t e_start;
+ ptr_t e_end;
+};
+
+/* Data structure for list of root sets. */
+/* We keep a hash table, so that we can filter out duplicate additions. */
+/* Under Win32, we need to do a better job of filtering overlaps, so */
+/* we resort to sequential search, and pay the price. */
+struct roots {
+ ptr_t r_start;
+ ptr_t r_end;
+# if !defined(MSWIN32) && !defined(MSWINCE)
+ struct roots * r_next;
+# endif
+ GC_bool r_tmp;
+ /* Delete before registering new dynamic libraries */
+};
+
+#if !defined(MSWIN32) && !defined(MSWINCE)
+ /* Size of hash table index to roots. */
+# define LOG_RT_SIZE 6
+# define RT_SIZE (1 << LOG_RT_SIZE) /* Power of 2, may be != MAX_ROOT_SETS */
+#endif
+
+/* Lists of all heap blocks and free lists */
+/* as well as other random data structures */
+/* that should not be scanned by the */
+/* collector. */
+/* These are grouped together in a struct */
+/* so that they can be easily skipped by the */
+/* GC_mark routine. */
+/* The ordering is weird to make GC_malloc */
+/* faster by keeping the important fields */
+/* sufficiently close together that a */
+/* single load of a base register will do. */
+/* Scalars that could easily appear to */
+/* be pointers are also put here. */
+/* The main fields should precede any */
+/* conditionally included fields, so that */
+/* gc_inl.h will work even if a different set */
+/* of macros is defined when the client is */
+/* compiled. */
+
+struct _GC_arrays {
+ word _heapsize;
+ word _max_heapsize;
+ word _requested_heapsize; /* Heap size due to explicit expansion */
+ ptr_t _last_heap_addr;
+ ptr_t _prev_heap_addr;
+ word _large_free_bytes;
+ /* Total bytes contained in blocks on large object free */
+ /* list. */
+ word _large_allocd_bytes;
+	/* Total number of bytes in allocated large object blocks.	*/
+ /* For the purposes of this counter and the next one only, a */
+ /* large object is one that occupies a block of at least */
+ /* 2*HBLKSIZE. */
+ word _max_large_allocd_bytes;
+ /* Maximum number of bytes that were ever allocated in */
+ /* large object blocks. This is used to help decide when it */
+ /* is safe to split up a large block. */
+ word _words_allocd_before_gc;
+ /* Number of words allocated before this */
+ /* collection cycle. */
+# ifndef SEPARATE_GLOBALS
+ word _words_allocd;
+ /* Number of words allocated during this collection cycle */
+# endif
+ word _words_wasted;
+ /* Number of words wasted due to internal fragmentation */
+ /* in large objects, or due to dropping blacklisted */
+ /* blocks, since last gc. Approximate. */
+ word _words_finalized;
+ /* Approximate number of words in objects (and headers) */
+	/* that became ready for finalization in the last	*/
+ /* collection. */
+ word _non_gc_bytes_at_gc;
+ /* Number of explicitly managed bytes of storage */
+ /* at last collection. */
+ word _mem_freed;
+ /* Number of explicitly deallocated words of memory */
+ /* since last collection. */
+ word _finalizer_mem_freed;
+ /* Words of memory explicitly deallocated while */
+ /* finalizers were running. Used to approximate mem. */
+ /* explicitly deallocated by finalizers. */
+ ptr_t _scratch_end_ptr;
+ ptr_t _scratch_last_end_ptr;
+ /* Used by headers.c, and can easily appear to point to */
+ /* heap. */
+ GC_mark_proc _mark_procs[MAX_MARK_PROCS];
+ /* Table of user-defined mark procedures. There is */
+ /* a small number of these, which can be referenced */
+ /* by DS_PROC mark descriptors. See gc_mark.h. */
+
+# ifndef SEPARATE_GLOBALS
+ ptr_t _objfreelist[MAXOBJSZ+1];
+ /* free list for objects */
+ ptr_t _aobjfreelist[MAXOBJSZ+1];
+ /* free list for atomic objs */
+# endif
+
+ ptr_t _uobjfreelist[MAXOBJSZ+1];
+ /* uncollectable but traced objs */
+ /* objects on this and auobjfreelist */
+ /* are always marked, except during */
+ /* garbage collections. */
+# ifdef ATOMIC_UNCOLLECTABLE
+  ptr_t _auobjfreelist[MAXOBJSZ+1];
+		/* atomic uncollectable but traced objs	*/
+# endif
+
+# ifdef GATHERSTATS
+ word _composite_in_use;
+ /* Number of words in accessible composite */
+ /* objects. */
+ word _atomic_in_use;
+ /* Number of words in accessible atomic */
+ /* objects. */
+# endif
+# ifdef USE_MUNMAP
+ word _unmapped_bytes;
+# endif
+# ifdef MERGE_SIZES
+ unsigned _size_map[WORDS_TO_BYTES(MAXOBJSZ+1)];
+ /* Number of words to allocate for a given allocation request in */
+ /* bytes. */
+# endif
+
+# ifdef STUBBORN_ALLOC
+ ptr_t _sobjfreelist[MAXOBJSZ+1];
+# endif
+ /* free list for immutable objects */
+ map_entry_type * _obj_map[MAXOBJSZ+1];
+ /* If not NIL, then a pointer to a map of valid */
+ /* object addresses. _obj_map[sz][i] is j if the */
+ /* address block_start+i is a valid pointer */
+ /* to an object at block_start + */
+ /* WORDS_TO_BYTES(BYTES_TO_WORDS(i) - j) */
+ /* I.e. j is a word displacement from the */
+ /* object beginning. */
+ /* The entry is OBJ_INVALID if the corresponding */
+ /* address is not a valid pointer. It is */
+ /* OFFSET_TOO_BIG if the value j would be too */
+ /* large to fit in the entry. (Note that the */
+ /* size of these entries matters, both for */
+ /* space consumption and for cache utilization.) */
+# define OFFSET_TOO_BIG 0xfe
+# define OBJ_INVALID 0xff
+# define MAP_ENTRY(map, bytes) (map)[bytes]
+# define MAP_ENTRIES HBLKSIZE
+# define MAP_SIZE MAP_ENTRIES
+# define CPP_MAX_OFFSET (OFFSET_TOO_BIG - 1)
+# define MAX_OFFSET ((word)CPP_MAX_OFFSET)
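+	  /* Worked example (assuming 4-byte words): in a block of	*/
+	  /* 2-word (8-byte) objects, MAP_ENTRY(map, 12) == 1, since	*/
+	  /* block_start+12 points one word past the object that	*/
+	  /* begins at byte offset 8.					*/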
+ /* The following are used only if GC_all_interior_ptrs != 0 */
+# define VALID_OFFSET_SZ \
+ (CPP_MAX_OFFSET > WORDS_TO_BYTES(CPP_MAXOBJSZ)? \
+ CPP_MAX_OFFSET+1 \
+ : WORDS_TO_BYTES(CPP_MAXOBJSZ)+1)
+ char _valid_offsets[VALID_OFFSET_SZ];
+ /* GC_valid_offsets[i] == TRUE ==> i */
+ /* is registered as a displacement. */
+ char _modws_valid_offsets[sizeof(word)];
+ /* GC_valid_offsets[i] ==> */
+ /* GC_modws_valid_offsets[i%sizeof(word)] */
+# define OFFSET_VALID(displ) \
+ (GC_all_interior_pointers || GC_valid_offsets[displ])
+# ifdef STUBBORN_ALLOC
+ page_hash_table _changed_pages;
+	/* Stubborn object pages that were changed since the last call to */
+ /* GC_read_changed. */
+ page_hash_table _prev_changed_pages;
+	/* Stubborn object pages that were changed before the last call to */
+ /* GC_read_changed. */
+# endif
+# if defined(PROC_VDB) || defined(MPROTECT_VDB)
+ page_hash_table _grungy_pages; /* Pages that were dirty at last */
+ /* GC_read_dirty. */
+# endif
+# ifdef MPROTECT_VDB
+ VOLATILE page_hash_table _dirty_pages;
+ /* Pages dirtied since last GC_read_dirty. */
+# endif
+# ifdef PROC_VDB
+ page_hash_table _written_pages; /* Pages ever dirtied */
+# endif
+# ifdef LARGE_CONFIG
+# if CPP_WORDSZ > 32
+# define MAX_HEAP_SECTS 4096 /* overflows at roughly 64 GB */
+# else
+# define MAX_HEAP_SECTS 768 /* Separately added heap sections. */
+# endif
+# else
+# ifdef SMALL_CONFIG
+# define MAX_HEAP_SECTS 128 /* Roughly 256MB (128*2048*1K) */
+# else
+# define MAX_HEAP_SECTS 384 /* Roughly 3GB */
+# endif
+# endif
+ struct HeapSect {
+ ptr_t hs_start; word hs_bytes;
+ } _heap_sects[MAX_HEAP_SECTS];
+# if defined(MSWIN32) || defined(MSWINCE)
+ ptr_t _heap_bases[MAX_HEAP_SECTS];
+ /* Start address of memory regions obtained from kernel. */
+# endif
+# ifdef MSWINCE
+ word _heap_lengths[MAX_HEAP_SECTS];
+    				/* Committed lengths of memory regions obtained from kernel. */
+# endif
+ struct roots _static_roots[MAX_ROOT_SETS];
+# if !defined(MSWIN32) && !defined(MSWINCE)
+ struct roots * _root_index[RT_SIZE];
+# endif
+ struct exclusion _excl_table[MAX_EXCLUSIONS];
+ /* Block header index; see gc_headers.h */
+ bottom_index * _all_nils;
+ bottom_index * _top_index [TOP_SZ];
+#ifdef SAVE_CALL_CHAIN
+ struct callinfo _last_stack[NFRAMES]; /* Stack at last garbage collection.*/
+ /* Useful for debugging mysterious */
+ /* object disappearances. */
+ /* In the multithreaded case, we */
+ /* currently only save the calling */
+ /* stack. */
+#endif
+};
+
+GC_API GC_FAR struct _GC_arrays GC_arrays;
+
+# ifndef SEPARATE_GLOBALS
+# define GC_objfreelist GC_arrays._objfreelist
+# define GC_aobjfreelist GC_arrays._aobjfreelist
+# define GC_words_allocd GC_arrays._words_allocd
+# endif
+# define GC_uobjfreelist GC_arrays._uobjfreelist
+# ifdef ATOMIC_UNCOLLECTABLE
+# define GC_auobjfreelist GC_arrays._auobjfreelist
+# endif
+# define GC_sobjfreelist GC_arrays._sobjfreelist
+# define GC_valid_offsets GC_arrays._valid_offsets
+# define GC_modws_valid_offsets GC_arrays._modws_valid_offsets
+# ifdef STUBBORN_ALLOC
+# define GC_changed_pages GC_arrays._changed_pages
+# define GC_prev_changed_pages GC_arrays._prev_changed_pages
+# endif
+# define GC_obj_map GC_arrays._obj_map
+# define GC_last_heap_addr GC_arrays._last_heap_addr
+# define GC_prev_heap_addr GC_arrays._prev_heap_addr
+# define GC_words_wasted GC_arrays._words_wasted
+# define GC_large_free_bytes GC_arrays._large_free_bytes
+# define GC_large_allocd_bytes GC_arrays._large_allocd_bytes
+# define GC_max_large_allocd_bytes GC_arrays._max_large_allocd_bytes
+# define GC_words_finalized GC_arrays._words_finalized
+# define GC_non_gc_bytes_at_gc GC_arrays._non_gc_bytes_at_gc
+# define GC_mem_freed GC_arrays._mem_freed
+# define GC_finalizer_mem_freed GC_arrays._finalizer_mem_freed
+# define GC_scratch_end_ptr GC_arrays._scratch_end_ptr
+# define GC_scratch_last_end_ptr GC_arrays._scratch_last_end_ptr
+# define GC_mark_procs GC_arrays._mark_procs
+# define GC_heapsize GC_arrays._heapsize
+# define GC_max_heapsize GC_arrays._max_heapsize
+# define GC_requested_heapsize GC_arrays._requested_heapsize
+# define GC_words_allocd_before_gc GC_arrays._words_allocd_before_gc
+# define GC_heap_sects GC_arrays._heap_sects
+# define GC_last_stack GC_arrays._last_stack
+# ifdef USE_MUNMAP
+# define GC_unmapped_bytes GC_arrays._unmapped_bytes
+# endif
+# if defined(MSWIN32) || defined(MSWINCE)
+# define GC_heap_bases GC_arrays._heap_bases
+# endif
+# ifdef MSWINCE
+# define GC_heap_lengths GC_arrays._heap_lengths
+# endif
+# define GC_static_roots GC_arrays._static_roots
+# define GC_root_index GC_arrays._root_index
+# define GC_excl_table GC_arrays._excl_table
+# define GC_all_nils GC_arrays._all_nils
+# define GC_top_index GC_arrays._top_index
+# if defined(PROC_VDB) || defined(MPROTECT_VDB)
+# define GC_grungy_pages GC_arrays._grungy_pages
+# endif
+# ifdef MPROTECT_VDB
+# define GC_dirty_pages GC_arrays._dirty_pages
+# endif
+# ifdef PROC_VDB
+# define GC_written_pages GC_arrays._written_pages
+# endif
+# ifdef GATHERSTATS
+# define GC_composite_in_use GC_arrays._composite_in_use
+# define GC_atomic_in_use GC_arrays._atomic_in_use
+# endif
+# ifdef MERGE_SIZES
+# define GC_size_map GC_arrays._size_map
+# endif
+
+# define beginGC_arrays ((ptr_t)(&GC_arrays))
+# define endGC_arrays (((ptr_t)(&GC_arrays)) + (sizeof GC_arrays))
+
+#define USED_HEAP_SIZE (GC_heapsize - GC_large_free_bytes)
+
+/* Object kinds: */
+# define MAXOBJKINDS 16
+
+extern struct obj_kind {
+   ptr_t *ok_freelist;	/* Array of free list headers for this kind of object */
+ /* Point either to GC_arrays or to storage allocated */
+ /* with GC_scratch_alloc. */
+ struct hblk **ok_reclaim_list;
+ /* List headers for lists of blocks waiting to be */
+ /* swept. */
+ word ok_descriptor; /* Descriptor template for objects in this */
+ /* block. */
+ GC_bool ok_relocate_descr;
+ /* Add object size in bytes to descriptor */
+ /* template to obtain descriptor. Otherwise */
+ /* template is used as is. */
+ GC_bool ok_init; /* Clear objects before putting them on the free list. */
+} GC_obj_kinds[MAXOBJKINDS];
+
+# define beginGC_obj_kinds ((ptr_t)(&GC_obj_kinds))
+# define endGC_obj_kinds (beginGC_obj_kinds + (sizeof GC_obj_kinds))
+
+/* Variables that used to be in GC_arrays, but need to be accessed by */
+/* inline allocation code. If they were in GC_arrays, the inlined */
+/* allocation code would include GC_arrays offsets (as it once did),	*/
+/* which introduced maintenance problems.				*/
+
+#ifdef SEPARATE_GLOBALS
+ word GC_words_allocd;
+ /* Number of words allocated during this collection cycle */
+ ptr_t GC_objfreelist[MAXOBJSZ+1];
+ /* free list for NORMAL objects */
+# define beginGC_objfreelist ((ptr_t)(&GC_objfreelist))
+# define endGC_objfreelist (beginGC_objfreelist + sizeof(GC_objfreelist))
+
+ ptr_t GC_aobjfreelist[MAXOBJSZ+1];
+ /* free list for atomic (PTRFREE) objs */
+# define beginGC_aobjfreelist ((ptr_t)(&GC_aobjfreelist))
+# define endGC_aobjfreelist (beginGC_aobjfreelist + sizeof(GC_aobjfreelist))
+#endif
+
+/* Predefined kinds: */
+# define PTRFREE 0
+# define NORMAL 1
+# define UNCOLLECTABLE 2
+# ifdef ATOMIC_UNCOLLECTABLE
+# define AUNCOLLECTABLE 3
+# define STUBBORN 4
+# define IS_UNCOLLECTABLE(k) (((k) & ~1) == UNCOLLECTABLE)
+# else
+# define STUBBORN 3
+# define IS_UNCOLLECTABLE(k) ((k) == UNCOLLECTABLE)
+# endif
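+
+/* Note on the encoding above (an observation, not new machinery):	*/
+/* with ATOMIC_UNCOLLECTABLE, UNCOLLECTABLE and AUNCOLLECTABLE are	*/
+/* 2 and 3, so clearing the low bit with (k & ~1) tests for both	*/
+/* uncollectable kinds in a single comparison.				*/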
+
+extern int GC_n_kinds;
+
+GC_API word GC_fo_entries;
+
+extern word GC_n_heap_sects; /* Number of separately added heap */
+ /* sections. */
+
+extern word GC_page_size;
+
+# if defined(MSWIN32) || defined(MSWINCE)
+ struct _SYSTEM_INFO;
+ extern struct _SYSTEM_INFO GC_sysinfo;
+ extern word GC_n_heap_bases; /* See GC_heap_bases. */
+# endif
+
+extern word GC_total_stack_black_listed;
+ /* Number of bytes on stack blacklist. */
+
+extern word GC_black_list_spacing;
+ /* Average number of bytes between blacklisted */
+ /* blocks. Approximate. */
+ /* Counts only blocks that are */
+ /* "stack-blacklisted", i.e. that are */
+ /* problematic in the interior of an object. */
+
+extern map_entry_type * GC_invalid_map;
+ /* Pointer to the nowhere valid hblk map */
+ /* Blocks pointing to this map are free. */
+
+extern struct hblk * GC_hblkfreelist[];
+ /* List of completely empty heap blocks */
+ /* Linked through hb_next field of */
+ /* header structure associated with */
+ /* block. */
+
+extern GC_bool GC_objects_are_marked; /* There are marked objects in */
+ /* the heap. */
+
+#ifndef SMALL_CONFIG
+ extern GC_bool GC_incremental;
+ /* Using incremental/generational collection. */
+# define TRUE_INCREMENTAL \
+ (GC_incremental && GC_time_limit != GC_TIME_UNLIMITED)
+ /* True incremental, not just generational, mode */
+#else
+# define GC_incremental FALSE
+ /* Hopefully allow optimizer to remove some code. */
+# define TRUE_INCREMENTAL FALSE
+#endif
+
+extern GC_bool GC_dirty_maintained;
+ /* Dirty bits are being maintained, */
+ /* either for incremental collection, */
+ /* or to limit the root set. */
+
+extern word GC_root_size; /* Total size of registered root sections */
+
+extern GC_bool GC_debugging_started; /* GC_debug_malloc has been called. */
+
+extern long GC_large_alloc_warn_interval;
+ /* Interval between unsuppressed warnings. */
+
+extern long GC_large_alloc_warn_suppressed;
+ /* Number of warnings suppressed so far. */
+
+#ifdef THREADS
+ extern GC_bool GC_world_stopped;
+#endif
+
+/* Operations */
+# ifndef abs
+# define abs(x) ((x) < 0? (-(x)) : (x))
+# endif
+
+
+/* Marks are in a reserved area in */
+/* each heap block. Each word has one mark bit associated */
+/* with it. Only those corresponding to the beginning of an */
+/* object are used. */
+
+/* Set mark bit correctly, even if mark bits may be concurrently */
+/* accessed. */
+#ifdef PARALLEL_MARK
+# define OR_WORD(addr, bits) \
+ { word old; \
+ do { \
+ old = *((volatile word *)addr); \
+ } while (!GC_compare_and_exchange((addr), old, old | (bits))); \
+ }
+# define OR_WORD_EXIT_IF_SET(addr, bits, exit_label) \
+ { word old; \
+ word my_bits = (bits); \
+ do { \
+ old = *((volatile word *)addr); \
+ if (old & my_bits) goto exit_label; \
+ } while (!GC_compare_and_exchange((addr), old, old | my_bits)); \
+ }
+#else
+# define OR_WORD(addr, bits) *(addr) |= (bits)
+# define OR_WORD_EXIT_IF_SET(addr, bits, exit_label) \
+ { \
+ word old = *(addr); \
+ word my_bits = (bits); \
+ if (old & my_bits) goto exit_label; \
+ *(addr) = (old | my_bits); \
+ }
+#endif
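+
+/* Illustrative sketch (hypothetical caller; GC_compare_and_exchange	*/
+/* is assumed, per the loop above, to return nonzero on success):	*/
+/* atomically setting bit 3 of a shared mark word.			*/
+#if 0
+  static void or_word_example(word *shared)
+  {
+    OR_WORD(shared, (word)1 << 3);
+    /* Bit 3 is now set, even if other marker threads updated	*/
+    /* *shared concurrently.					*/
+  }
+#endif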
+
+/* Mark bit operations */
+
+/*
+ * Retrieve, set, clear the mark bit corresponding
+ * to the nth word in a given heap block.
+ *
+ * (Recall that bit n corresponds to object beginning at word n
+ * relative to the beginning of the block, including unused words)
+ */
+
+#ifdef USE_MARK_BYTES
+# define mark_bit_from_hdr(hhdr,n) ((hhdr)->hb_marks[(n) >> 1])
+# define set_mark_bit_from_hdr(hhdr,n) ((hhdr)->hb_marks[(n)>>1]) = 1
+# define clear_mark_bit_from_hdr(hhdr,n) ((hhdr)->hb_marks[(n)>>1]) = 0
+#else /* !USE_MARK_BYTES */
+# define mark_bit_from_hdr(hhdr,n) (((hhdr)->hb_marks[divWORDSZ(n)] \
+ >> (modWORDSZ(n))) & (word)1)
+# define set_mark_bit_from_hdr(hhdr,n) \
+ OR_WORD((hhdr)->hb_marks+divWORDSZ(n), \
+ (word)1 << modWORDSZ(n))
+# define clear_mark_bit_from_hdr(hhdr,n) (hhdr)->hb_marks[divWORDSZ(n)] \
+ &= ~((word)1 << modWORDSZ(n))
+#endif /* !USE_MARK_BYTES */
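+
+/* Worked example (bit-mapped marks, 64-bit words): for n == 70,	*/
+/* mark_bit_from_hdr(hhdr, 70) reads bit modWORDSZ(70) == 6 of		*/
+/* hb_marks[divWORDSZ(70)] == hb_marks[1].				*/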
+
+/* Important internal collector routines */
+
+ptr_t GC_approx_sp GC_PROTO((void));
+
+GC_bool GC_should_collect GC_PROTO((void));
+
+void GC_apply_to_all_blocks GC_PROTO(( \
+ void (*fn) GC_PROTO((struct hblk *h, word client_data)), \
+ word client_data));
+ /* Invoke fn(hbp, client_data) for each */
+ /* allocated heap block. */
+struct hblk * GC_next_used_block GC_PROTO((struct hblk * h));
+ /* Return first in-use block >= h */
+struct hblk * GC_prev_block GC_PROTO((struct hblk * h));
+ /* Return last block <= h. Returned block */
+ /* is managed by GC, but may or may not be in */
+ /* use. */
+void GC_mark_init GC_PROTO((void));
+void GC_clear_marks GC_PROTO((void)); /* Clear mark bits for all heap objects. */
+void GC_invalidate_mark_state GC_PROTO((void));
+ /* Tell the marker that marked */
+ /* objects may point to unmarked */
+ /* ones, and roots may point to */
+ /* unmarked objects. */
+ /* Reset mark stack. */
+GC_bool GC_mark_stack_empty GC_PROTO((void));
+GC_bool GC_mark_some GC_PROTO((ptr_t cold_gc_frame));
+  			/* Perform about one page's worth of marking	*/
+ /* work of whatever kind is needed. Returns */
+ /* quickly if no collection is in progress. */
+			/* Returns TRUE if the mark phase finished.	*/
+void GC_initiate_gc GC_PROTO((void));
+  			/* Initiate a collection.		*/
+			/* If the mark state is invalid, this	*/
+			/* becomes a full collection.  Otherwise */
+			/* it is a partial one.			*/
+void GC_push_all GC_PROTO((ptr_t bottom, ptr_t top));
+ /* Push everything in a range */
+ /* onto mark stack. */
+void GC_push_selected GC_PROTO(( \
+ ptr_t bottom, \
+ ptr_t top, \
+ int (*dirty_fn) GC_PROTO((struct hblk *h)), \
+ void (*push_fn) GC_PROTO((ptr_t bottom, ptr_t top)) ));
+ /* Push all pages h in [b,t) s.t. */
+			    /* dirty_fn(h) != 0 onto mark stack.	*/
+#ifndef SMALL_CONFIG
+ void GC_push_conditional GC_PROTO((ptr_t b, ptr_t t, GC_bool all));
+#else
+# define GC_push_conditional(b, t, all) GC_push_all(b, t)
+#endif
+ /* Do either of the above, depending */
+ /* on the third arg. */
+void GC_push_all_stack GC_PROTO((ptr_t b, ptr_t t));
+ /* As above, but consider */
+ /* interior pointers as valid */
+void GC_push_all_eager GC_PROTO((ptr_t b, ptr_t t));
+ /* Same as GC_push_all_stack, but */
+ /* ensures that stack is scanned */
+ /* immediately, not just scheduled */
+ /* for scanning. */
+#ifndef THREADS
+ void GC_push_all_stack_partially_eager GC_PROTO(( \
+ ptr_t bottom, ptr_t top, ptr_t cold_gc_frame ));
+ /* Similar to GC_push_all_eager, but only the */
+ /* part hotter than cold_gc_frame is scanned */
+ /* immediately. Needed to ensure that callee- */
+ /* save registers are not missed. */
+#else
+ /* In the threads case, we push part of the current thread stack */
+ /* with GC_push_all_eager when we push the registers. This gets the */
+ /* callee-save registers that may disappear. The remainder of the */
+ /* stacks are scheduled for scanning in *GC_push_other_roots, which */
+ /* is thread-package-specific. */
+#endif
+void GC_push_current_stack GC_PROTO((ptr_t cold_gc_frame));
+ /* Push enough of the current stack eagerly to */
+ /* ensure that callee-save registers saved in */
+ /* GC frames are scanned. */
+ /* In the non-threads case, schedule entire */
+ /* stack for scanning. */
+void GC_push_roots GC_PROTO((GC_bool all, ptr_t cold_gc_frame));
+ /* Push all or dirty roots. */
+extern void (*GC_push_other_roots) GC_PROTO((void));
+ /* Push system or application specific roots */
+ /* onto the mark stack. In some environments */
+ /* (e.g. threads environments) this is */
+			/* predefined to be non-zero.  A client-supplied */
+ /* replacement should also call the original */
+ /* function. */
+extern void GC_push_gc_structures GC_PROTO((void));
+ /* Push GC internal roots. These are normally */
+ /* included in the static data segment, and */
+  			/* thus implicitly pushed.  But we must do this	*/
+ /* explicitly if normal root processing is */
+ /* disabled. Calls the following: */
+ extern void GC_push_finalizer_structures GC_PROTO((void));
+ extern void GC_push_stubborn_structures GC_PROTO((void));
+# ifdef THREADS
+ extern void GC_push_thread_structures GC_PROTO((void));
+# endif
+extern void (*GC_start_call_back) GC_PROTO((void));
+ /* Called at start of full collections. */
+ /* Not called if 0. Called with allocation */
+ /* lock held. */
+ /* 0 by default. */
+# if defined(USE_GENERIC_PUSH_REGS)
+ void GC_generic_push_regs GC_PROTO((ptr_t cold_gc_frame));
+# else
+ void GC_push_regs GC_PROTO((void));
+# endif
+# if defined(SPARC) || defined(IA64)
+ /* Cause all stacked registers to be saved in memory. Return a */
+ /* pointer to the top of the corresponding memory stack. */
+ word GC_save_regs_in_stack GC_PROTO((void));
+# endif
+ /* Push register contents onto mark stack. */
+ /* If NURSERY is defined, the default push */
+ /* action can be overridden with GC_push_proc */
+
+# ifdef NURSERY
+ extern void (*GC_push_proc)(ptr_t);
+# endif
+# if defined(MSWIN32) || defined(MSWINCE)
+ void __cdecl GC_push_one GC_PROTO((word p));
+# else
+ void GC_push_one GC_PROTO((word p));
+ /* If p points to an object, mark it */
+ /* and push contents on the mark stack */
+ /* Pointer recognition test always */
+ /* accepts interior pointers, i.e. this */
+ /* is appropriate for pointers found on */
+ /* stack. */
+# endif
+# if defined(PRINT_BLACK_LIST) || defined(KEEP_BACK_PTRS)
+ void GC_mark_and_push_stack GC_PROTO((word p, ptr_t source));
+ /* Ditto, omits plausibility test */
+# else
+ void GC_mark_and_push_stack GC_PROTO((word p));
+# endif
+void GC_push_marked GC_PROTO((struct hblk * h, hdr * hhdr));
+ /* Push contents of all marked objects in h onto */
+ /* mark stack. */
+#ifdef SMALL_CONFIG
+# define GC_push_next_marked_dirty(h) GC_push_next_marked(h)
+#else
+ struct hblk * GC_push_next_marked_dirty GC_PROTO((struct hblk * h));
+ /* Invoke GC_push_marked on next dirty block above h. */
+ /* Return a pointer just past the end of this block. */
+#endif /* !SMALL_CONFIG */
+struct hblk * GC_push_next_marked GC_PROTO((struct hblk * h));
+ /* Ditto, but also mark from clean pages. */
+struct hblk * GC_push_next_marked_uncollectable GC_PROTO((struct hblk * h));
+ /* Ditto, but mark only from uncollectable pages. */
+GC_bool GC_stopped_mark GC_PROTO((GC_stop_func stop_func));
+ /* Stop world and mark from all roots */
+ /* and rescuers. */
+void GC_clear_hdr_marks GC_PROTO((hdr * hhdr));
+ /* Clear the mark bits in a header */
+void GC_set_hdr_marks GC_PROTO((hdr * hhdr));
+ /* Set the mark bits in a header */
+void GC_set_fl_marks GC_PROTO((ptr_t p));
+ /* Set all mark bits associated with */
+ /* a free list. */
+void GC_add_roots_inner GC_PROTO((char * b, char * e, GC_bool tmp));
+void GC_remove_roots_inner GC_PROTO((char * b, char * e));
+GC_bool GC_is_static_root GC_PROTO((ptr_t p));
+ /* Is the address p in one of the registered static */
+ /* root sections? */
+# if defined(MSWIN32) || defined(_WIN32_WCE_EMULATION)
+GC_bool GC_is_tmp_root GC_PROTO((ptr_t p));
+ /* Is the address p in one of the temporary static */
+ /* root sections? */
+# endif
+void GC_register_dynamic_libraries GC_PROTO((void));
+ /* Add dynamic library data sections to the root set. */
+
+GC_bool GC_register_main_static_data GC_PROTO((void));
+ /* We need to register the main data segment. Returns */
+ /* TRUE unless this is done implicitly as part of */
+ /* dynamic library registration. */
+
+/* Machine dependent startup routines */
+ptr_t GC_get_stack_base GC_PROTO((void)); /* Cold end of stack */
+#ifdef IA64
+ ptr_t GC_get_register_stack_base GC_PROTO((void));
+ /* Cold end of register stack. */
+#endif
+void GC_register_data_segments GC_PROTO((void));
+
+/* Black listing: */
+void GC_bl_init GC_PROTO((void));
+# ifdef PRINT_BLACK_LIST
+ void GC_add_to_black_list_normal GC_PROTO((word p, ptr_t source));
+ /* Register bits as a possible future false */
+ /* reference from the heap or static data */
+# define GC_ADD_TO_BLACK_LIST_NORMAL(bits, source) \
+ if (GC_all_interior_pointers) { \
+ GC_add_to_black_list_stack(bits, (ptr_t)(source)); \
+ } else { \
+ GC_add_to_black_list_normal(bits, (ptr_t)(source)); \
+ }
+# else
+ void GC_add_to_black_list_normal GC_PROTO((word p));
+# define GC_ADD_TO_BLACK_LIST_NORMAL(bits, source) \
+ if (GC_all_interior_pointers) { \
+ GC_add_to_black_list_stack(bits); \
+ } else { \
+ GC_add_to_black_list_normal(bits); \
+ }
+# endif
+
+# ifdef PRINT_BLACK_LIST
+ void GC_add_to_black_list_stack GC_PROTO((word p, ptr_t source));
+# else
+ void GC_add_to_black_list_stack GC_PROTO((word p));
+# endif
+struct hblk * GC_is_black_listed GC_PROTO((struct hblk * h, word len));
+ /* If there are likely to be false references */
+ /* to a block starting at h of the indicated */
+ /* length, then return the next plausible */
+ /* starting location for h that might avoid */
+ /* these false references. */
+void GC_promote_black_lists GC_PROTO((void));
+ /* Declare an end to a black listing phase. */
+void GC_unpromote_black_lists GC_PROTO((void));
+ /* Approximately undo the effect of the above. */
+ /* This actually loses some information, but */
+ /* only in a reasonably safe way. */
+word GC_number_stack_black_listed GC_PROTO(( \
+ struct hblk *start, struct hblk *endp1));
+ /* Return the number of (stack) blacklisted */
+ /* blocks in the range for statistical */
+ /* purposes. */
+
+ptr_t GC_scratch_alloc GC_PROTO((word bytes));
+ /* GC internal memory allocation for */
+ /* small objects. Deallocation is not */
+ /* possible. */
+
+/* Heap block layout maps: */
+void GC_invalidate_map GC_PROTO((hdr * hhdr));
+ /* Remove the object map associated */
+ /* with the block. This identifies */
+ /* the block as invalid to the mark */
+ /* routines. */
+GC_bool GC_add_map_entry GC_PROTO((word sz));
+ /* Add a heap block map for objects of */
+ /* size sz to obj_map. */
+ /* Return FALSE on failure. */
+void GC_register_displacement_inner GC_PROTO((word offset));
+ /* Version of GC_register_displacement */
+ /* that assumes lock is already held */
+ /* and signals are already disabled. */
+
+/* hblk allocation: */
+void GC_new_hblk GC_PROTO((word size_in_words, int kind));
+ /* Allocate a new heap block, and build */
+ /* a free list in it. */
+
+ptr_t GC_build_fl GC_PROTO((struct hblk *h, word sz,
+ GC_bool clear, ptr_t list));
+ /* Build a free list for objects of */
+ /* size sz in block h. Append list to */
+ /* end of the free lists. Possibly */
+ /* clear objects on the list. Normally */
+ /* called by GC_new_hblk, but also */
+ /* called explicitly without GC lock. */
+
+struct hblk * GC_allochblk GC_PROTO(( \
+ word size_in_words, int kind, unsigned flags));
+ /* Allocate a heap block, inform */
+ /* the marker that block is valid */
+ /* for objects of indicated size. */
+
+ptr_t GC_alloc_large GC_PROTO((word lw, int k, unsigned flags));
+ /* Allocate a large block of size lw words. */
+ /* The block is not cleared. */
+ /* Flags is 0 or IGNORE_OFF_PAGE. */
+			/* Calls GC_allochblk to do the actual 	*/
+ /* allocation, but also triggers GC and/or */
+ /* heap expansion as appropriate. */
+ /* Does not update GC_words_allocd, but does */
+ /* other accounting. */
+
+ptr_t GC_alloc_large_and_clear GC_PROTO((word lw, int k, unsigned flags));
+ /* As above, but clear block if appropriate */
+ /* for kind k. */
+
+void GC_freehblk GC_PROTO((struct hblk * p));
+ /* Deallocate a heap block and mark it */
+ /* as invalid. */
+
+/* Misc GC: */
+void GC_init_inner GC_PROTO((void));
+GC_bool GC_expand_hp_inner GC_PROTO((word n));
+void GC_start_reclaim GC_PROTO((int abort_if_found));
+ /* Restore unmarked objects to free */
+ /* lists, or (if abort_if_found is */
+ /* TRUE) report them. */
+ /* Sweeping of small object pages is */
+ /* largely deferred. */
+void GC_continue_reclaim GC_PROTO((word sz, int kind));
+ /* Sweep pages of the given size and */
+ /* kind, as long as possible, and */
+				/* as long as the corresponding free	*/
+ /* empty. */
+void GC_reclaim_or_delete_all GC_PROTO((void));
+ /* Arrange for all reclaim lists to be */
+ /* empty. Judiciously choose between */
+ /* sweeping and discarding each page. */
+GC_bool GC_reclaim_all GC_PROTO((GC_stop_func stop_func, GC_bool ignore_old));
+ /* Reclaim all blocks. Abort (in a */
+ 				/* consistent state) if stop_func	*/
+GC_bool GC_block_empty GC_PROTO((hdr * hhdr));
+ /* Block completely unmarked? */
+GC_bool GC_never_stop_func GC_PROTO((void));
+ /* Returns FALSE. */
+GC_bool GC_try_to_collect_inner GC_PROTO((GC_stop_func f));
+
+ /* Collect; caller must have acquired */
+ /* lock and disabled signals. */
+ /* Collection is aborted if f returns */
+ /* TRUE. Returns TRUE if it completes */
+ /* successfully. */
+# define GC_gcollect_inner() \
+ (void) GC_try_to_collect_inner(GC_never_stop_func)
+void GC_finish_collection GC_PROTO((void));
+ /* Finish collection. Mark bits are */
+ /* consistent and lock is still held. */
+GC_bool GC_collect_or_expand GC_PROTO(( \
+ word needed_blocks, GC_bool ignore_off_page));
+ /* Collect or expand heap in an attempt */
+ /* make the indicated number of free */
+ /* blocks available. Should be called */
+ /* until the blocks are available or */
+ /* until it fails by returning FALSE. */
+
+extern GC_bool GC_is_initialized; /* GC_init() has been run. */
+
+#if defined(MSWIN32) || defined(MSWINCE)
+ void GC_deinit GC_PROTO((void));
+ /* Free any resources allocated by */
+ /* GC_init */
+#endif
+
+void GC_collect_a_little_inner GC_PROTO((int n));
+ /* Do n units worth of garbage */
+ /* collection work, if appropriate. */
+ /* A unit is an amount appropriate for */
+ /* HBLKSIZE bytes of allocation. */
+/* ptr_t GC_generic_malloc GC_PROTO((word lb, int k)); */
+ /* Allocate an object of the given */
+ /* kind. By default, there are only */
+ /* a few kinds: composite(pointerfree), */
+ /* atomic, uncollectable, etc. */
+ /* We claim it's possible for clever */
+ /* client code that understands GC */
+ /* internals to add more, e.g. to */
+ /* communicate object layout info */
+ /* to the collector. */
+ /* The actual decl is in gc_mark.h. */
+ptr_t GC_generic_malloc_ignore_off_page GC_PROTO((size_t b, int k));
+ /* As above, but pointers past the */
+ /* first page of the resulting object */
+ /* are ignored. */
+ptr_t GC_generic_malloc_inner GC_PROTO((word lb, int k));
+ /* Ditto, but I already hold lock, etc. */
+ptr_t GC_generic_malloc_words_small_inner GC_PROTO((word lw, int k));
+ /* Analogous to the above, but assumes */
+ /* a small object size, and bypasses */
+ /* MERGE_SIZES mechanism. */
+ptr_t GC_generic_malloc_words_small GC_PROTO((size_t lw, int k));
+ /* As above, but size in units of words */
+ /* Bypasses MERGE_SIZES. Assumes */
+ /* words <= MAXOBJSZ. */
+ptr_t GC_generic_malloc_inner_ignore_off_page GC_PROTO((size_t lb, int k));
+ /* Allocate an object, where */
+ /* the client guarantees that there */
+ /* will always be a pointer to the */
+ /* beginning of the object while the */
+ /* object is live. */
+ptr_t GC_allocobj GC_PROTO((word sz, int kind));
+ /* Make the indicated */
+ /* free list nonempty, and return its */
+ /* head. */
+
+void GC_free_inner(GC_PTR p);
+
+void GC_init_headers GC_PROTO((void));
+struct hblkhdr * GC_install_header GC_PROTO((struct hblk *h));
+ /* Install a header for block h. */
+ /* Return 0 on failure, or the header */
+ /* otherwise. */
+GC_bool GC_install_counts GC_PROTO((struct hblk * h, word sz));
+ /* Set up forwarding counts for block */
+ /* h of size sz. */
+ /* Return FALSE on failure. */
+void GC_remove_header GC_PROTO((struct hblk * h));
+ /* Remove the header for block h. */
+void GC_remove_counts GC_PROTO((struct hblk * h, word sz));
+ /* Remove forwarding counts for h. */
+hdr * GC_find_header GC_PROTO((ptr_t h)); /* Debugging only. */
+
+void GC_finalize GC_PROTO((void));
+ /* Perform all indicated finalization actions */
+ /* on unmarked objects. */
+ /* Unreachable finalizable objects are enqueued */
+ /* for processing by GC_invoke_finalizers. */
+ /* Invoked with lock. */
+
+void GC_notify_or_invoke_finalizers GC_PROTO((void));
+ /* If GC_finalize_on_demand is not set, invoke */
+ /* eligible finalizers. Otherwise: */
+ /* Call *GC_finalizer_notifier if there are */
+ /* finalizers to be run, and we haven't called */
+ /* this procedure yet this GC cycle. */
+
+GC_API GC_PTR GC_make_closure GC_PROTO((GC_finalization_proc fn, GC_PTR data));
+GC_API void GC_debug_invoke_finalizer GC_PROTO((GC_PTR obj, GC_PTR data));
+ /* Auxiliary fns to make finalization work */
+ /* correctly with displaced pointers introduced */
+ /* by the debugging allocators. */
+
+void GC_add_to_heap GC_PROTO((struct hblk *p, word bytes));
+ /* Add a HBLKSIZE aligned chunk to the heap. */
+
+void GC_print_obj GC_PROTO((ptr_t p));
+ /* P points to somewhere inside an object with */
+ /* debugging info. Print a human readable */
+ /* description of the object to stderr. */
+extern void (*GC_check_heap) GC_PROTO((void));
+ /* Check that all objects in the heap with */
+ /* debugging info are intact. */
+ /* Add any that are not to GC_smashed list. */
+extern void (*GC_print_all_smashed) GC_PROTO((void));
+ /* Print GC_smashed if it's not empty. */
+ /* Clear GC_smashed list. */
+extern void GC_print_all_errors GC_PROTO((void));
+ /* Print smashed and leaked objects, if any. */
+ /* Clear the lists of such objects. */
+extern void (*GC_print_heap_obj) GC_PROTO((ptr_t p));
+  			/* If possible, print a more		*/
+ /* detailed description of the object */
+ /* referred to by p. */
+#if defined(LINUX) && defined(__ELF__) && !defined(SMALL_CONFIG)
+ void GC_print_address_map GC_PROTO((void));
+ /* Print an address map of the process. */
+#endif
+
+extern GC_bool GC_have_errors; /* We saw a smashed or leaked object. */
+ /* Call error printing routine */
+ /* occasionally. */
+extern GC_bool GC_print_stats; /* Produce at least some logging output */
+ /* Set from environment variable. */
+
+#ifndef NO_DEBUGGING
+ extern GC_bool GC_dump_regularly; /* Generate regular debugging dumps. */
+# define COND_DUMP if (GC_dump_regularly) GC_dump();
+#else
+# define COND_DUMP
+#endif
+
+#ifdef KEEP_BACK_PTRS
+ extern long GC_backtraces;
+ void GC_generate_random_backtrace_no_gc(void);
+#endif
+
+extern GC_bool GC_print_back_height;
+
+#ifdef MAKE_BACK_GRAPH
+ void GC_print_back_graph_stats(void);
+#endif
+
+/* Macros used for collector internal allocation. */
+/* These assume the collector lock is held. */
+#ifdef DBG_HDRS_ALL
+ extern GC_PTR GC_debug_generic_malloc_inner(size_t lb, int k);
+ extern GC_PTR GC_debug_generic_malloc_inner_ignore_off_page(size_t lb,
+ int k);
+# define GC_INTERNAL_MALLOC GC_debug_generic_malloc_inner
+# define GC_INTERNAL_MALLOC_IGNORE_OFF_PAGE \
+ GC_debug_generic_malloc_inner_ignore_off_page
+# ifdef THREADS
+# define GC_INTERNAL_FREE GC_debug_free_inner
+# else
+# define GC_INTERNAL_FREE GC_debug_free
+# endif
+#else
+# define GC_INTERNAL_MALLOC GC_generic_malloc_inner
+# define GC_INTERNAL_MALLOC_IGNORE_OFF_PAGE \
+ GC_generic_malloc_inner_ignore_off_page
+# ifdef THREADS
+# define GC_INTERNAL_FREE GC_free_inner
+# else
+# define GC_INTERNAL_FREE GC_free
+# endif
+#endif
+
+/* Memory unmapping: */
+#ifdef USE_MUNMAP
+ void GC_unmap_old(void);
+ void GC_merge_unmapped(void);
+ void GC_unmap(ptr_t start, word bytes);
+ void GC_remap(ptr_t start, word bytes);
+ void GC_unmap_gap(ptr_t start1, word bytes1, ptr_t start2, word bytes2);
+#endif
+
+/* Virtual dirty bit implementation: */
+/* Each implementation exports the following: */
+void GC_read_dirty GC_PROTO((void));
+ /* Retrieve dirty bits. */
+GC_bool GC_page_was_dirty GC_PROTO((struct hblk *h));
+ /* Read retrieved dirty bits. */
+GC_bool GC_page_was_ever_dirty GC_PROTO((struct hblk *h));
+ /* Could the page contain valid heap pointers? */
+void GC_is_fresh GC_PROTO((struct hblk *h, word n));
+ /* Assert the region currently contains no */
+ /* valid pointers. */
+void GC_remove_protection GC_PROTO((struct hblk *h, word nblocks,
+ GC_bool pointerfree));
+  			/* h is about to be written or allocated.  Ensure  */
+ /* that it's not write protected by the virtual */
+ /* dirty bit implementation. */
+
+void GC_dirty_init GC_PROTO((void));
+
+/* Slow/general mark bit manipulation: */
+GC_API GC_bool GC_is_marked GC_PROTO((ptr_t p));
+void GC_clear_mark_bit GC_PROTO((ptr_t p));
+void GC_set_mark_bit GC_PROTO((ptr_t p));
+
+/* Stubborn objects: */
+void GC_read_changed GC_PROTO((void)); /* Analogous to GC_read_dirty */
+GC_bool GC_page_was_changed GC_PROTO((struct hblk * h));
+ /* Analogous to GC_page_was_dirty */
+void GC_clean_changing_list GC_PROTO((void));
+ /* Collect obsolete changing list entries */
+void GC_stubborn_init GC_PROTO((void));
+
+/* Debugging print routines: */
+void GC_print_block_list GC_PROTO((void));
+void GC_print_hblkfreelist GC_PROTO((void));
+void GC_print_heap_sects GC_PROTO((void));
+void GC_print_static_roots GC_PROTO((void));
+void GC_print_finalization_stats GC_PROTO((void));
+void GC_dump GC_PROTO((void));
+
+#ifdef KEEP_BACK_PTRS
+ void GC_store_back_pointer(ptr_t source, ptr_t dest);
+ void GC_marked_for_finalization(ptr_t dest);
+# define GC_STORE_BACK_PTR(source, dest) GC_store_back_pointer(source, dest)
+# define GC_MARKED_FOR_FINALIZATION(dest) GC_marked_for_finalization(dest)
+#else
+# define GC_STORE_BACK_PTR(source, dest)
+# define GC_MARKED_FOR_FINALIZATION(dest)
+#endif
+
+/* Make arguments appear live to compiler */
+# ifdef __WATCOMC__
+ void GC_noop(void*, ...);
+# else
+# ifdef __DMC__
+ GC_API void GC_noop(...);
+# else
+ GC_API void GC_noop();
+# endif
+# endif
+
+void GC_noop1 GC_PROTO((word));
+
+/* Logging and diagnostic output: */
+GC_API void GC_printf GC_PROTO((GC_CONST char * format, long, long, long, long, long, long));
+ /* A version of printf that doesn't allocate, */
+ /* is restricted to long arguments, and */
+ /* (unfortunately) doesn't use varargs for */
+ /* portability. Restricted to 6 args and */
+ /* 1K total output length. */
+ /* (We use sprintf. Hopefully that doesn't */
+ /* allocate for long arguments.) */
+# define GC_printf0(f) GC_printf(f, 0l, 0l, 0l, 0l, 0l, 0l)
+# define GC_printf1(f,a) GC_printf(f, (long)a, 0l, 0l, 0l, 0l, 0l)
+# define GC_printf2(f,a,b) GC_printf(f, (long)a, (long)b, 0l, 0l, 0l, 0l)
+# define GC_printf3(f,a,b,c) GC_printf(f, (long)a, (long)b, (long)c, 0l, 0l, 0l)
+# define GC_printf4(f,a,b,c,d) GC_printf(f, (long)a, (long)b, (long)c, \
+ (long)d, 0l, 0l)
+# define GC_printf5(f,a,b,c,d,e) GC_printf(f, (long)a, (long)b, (long)c, \
+ (long)d, (long)e, 0l)
+# define GC_printf6(f,a,b,c,d,e,g) GC_printf(f, (long)a, (long)b, (long)c, \
+ (long)d, (long)e, (long)g)
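+
+/* Usage sketch (illustrative; mirrors how the numbered wrappers	*/
+/* coerce each argument to long):					*/
+#if 0
+  static void printf_example(void)
+  {
+    GC_printf1("Heap size = %lu\n", (unsigned long)GC_heapsize);
+  }
+#endif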
+
+GC_API void GC_err_printf GC_PROTO((GC_CONST char * format, long, long, long, long, long, long));
+# define GC_err_printf0(f) GC_err_puts(f)
+# define GC_err_printf1(f,a) GC_err_printf(f, (long)a, 0l, 0l, 0l, 0l, 0l)
+# define GC_err_printf2(f,a,b) GC_err_printf(f, (long)a, (long)b, 0l, 0l, 0l, 0l)
+# define GC_err_printf3(f,a,b,c) GC_err_printf(f, (long)a, (long)b, (long)c, \
+ 0l, 0l, 0l)
+# define GC_err_printf4(f,a,b,c,d) GC_err_printf(f, (long)a, (long)b, \
+ (long)c, (long)d, 0l, 0l)
+# define GC_err_printf5(f,a,b,c,d,e) GC_err_printf(f, (long)a, (long)b, \
+ (long)c, (long)d, \
+ (long)e, 0l)
+# define GC_err_printf6(f,a,b,c,d,e,g) GC_err_printf(f, (long)a, (long)b, \
+ (long)c, (long)d, \
+ (long)e, (long)g)
+ /* Ditto, writes to stderr. */
+
+void GC_err_puts GC_PROTO((GC_CONST char *s));
+ /* Write s to stderr, don't buffer, don't add */
+ /* newlines, don't ... */
+
+#if defined(LINUX) && !defined(SMALL_CONFIG)
+ void GC_err_write GC_PROTO((GC_CONST char *buf, size_t len));
+ /* Write buf to stderr, don't buffer, don't add */
+ /* newlines, don't ... */
+#endif
+
+
+# ifdef GC_ASSERTIONS
+# define GC_ASSERT(expr) if(!(expr)) {\
+ GC_err_printf2("Assertion failure: %s:%ld\n", \
+ __FILE__, (unsigned long)__LINE__); \
+ ABORT("assertion failure"); }
+# else
+# define GC_ASSERT(expr)
+# endif
+
+/* Check a compile time assertion at compile time. The error */
+/* message for failure is a bit baroque, but ... */
+#if defined(mips) && !defined(__GNUC__)
+/* DOB: MIPSPro C gets an internal error taking the sizeof an array type.
+ This code works correctly (ugliness is to avoid "unused var" warnings) */
+# define GC_STATIC_ASSERT(expr) do { if (0) { char j[(expr)? 1 : -1]; j[0]='\0'; j[0]=j[0]; } } while(0)
+#else
+# define GC_STATIC_ASSERT(expr) sizeof(char[(expr)? 1 : -1])
+#endif
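+
+/* Usage sketch (illustrative): the assertion compiles to nothing	*/
+/* when the expression is true and fails to compile otherwise.		*/
+#if 0
+  static void static_assert_example(void)
+  {
+    GC_STATIC_ASSERT(sizeof(word) == sizeof(ptr_t));
+  }
+#endif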
+
+# if defined(PARALLEL_MARK) || defined(THREAD_LOCAL_ALLOC)
+ /* We need additional synchronization facilities from the thread */
+ /* support. We believe these are less performance critical */
+ /* than the main garbage collector lock; standard pthreads-based */
+ /* implementations should be sufficient. */
+
+ /* The mark lock and condition variable. If the GC lock is also */
+ /* acquired, the GC lock must be acquired first. The mark lock is */
+ /* used to both protect some variables used by the parallel */
+ /* marker, and to protect GC_fl_builder_count, below. */
+ /* GC_notify_all_marker() is called when */
+ /* the state of the parallel marker changes */
+    /* in some significant way (see gc_mark.h for details); such	*/
+    /* events include incrementing GC_mark_no.				*/
+ /* GC_notify_all_builder() is called when GC_fl_builder_count */
+ /* reaches 0. */
+
+ extern void GC_acquire_mark_lock();
+ extern void GC_release_mark_lock();
+ extern void GC_notify_all_builder();
+ /* extern void GC_wait_builder(); */
+ extern void GC_wait_for_reclaim();
+
+ extern word GC_fl_builder_count; /* Protected by mark lock. */
+# endif /* PARALLEL_MARK || THREAD_LOCAL_ALLOC */
+# ifdef PARALLEL_MARK
+ extern void GC_notify_all_marker();
+ extern void GC_wait_marker();
+ extern word GC_mark_no; /* Protected by mark lock. */
+
+ extern void GC_help_marker(word my_mark_no);
+ /* Try to help out parallel marker for mark cycle */
+ /* my_mark_no. Returns if the mark cycle finishes or */
+ /* was already done, or there was nothing to do for */
+ /* some other reason. */
+# endif /* PARALLEL_MARK */
+
+# if defined(GC_PTHREADS) && !defined(GC_SOLARIS_THREADS)
+ /* We define the thread suspension signal here, so that we can refer */
+ /* to it in the dirty bit implementation, if necessary. Ideally we */
+ /* would allocate a (real-time ?) signal using the standard mechanism.*/
+    /* Unfortunately, there is no standard mechanism.  (There is one 	*/
+ /* in Linux glibc, but it's not exported.) Thus we continue to use */
+ /* the same hard-coded signals we've always used. */
+# if !defined(SIG_SUSPEND)
+# if defined(GC_LINUX_THREADS) || defined(GC_DGUX386_THREADS)
+# if defined(SPARC) && !defined(SIGPWR)
+ /* SPARC/Linux doesn't properly define SIGPWR in <signal.h>.
+ * It is aliased to SIGLOST in asm/signal.h, though. */
+# define SIG_SUSPEND SIGLOST
+# else
+ /* Linuxthreads itself uses SIGUSR1 and SIGUSR2. */
+# define SIG_SUSPEND SIGPWR
+# endif
+# else /* !GC_LINUX_THREADS */
+# if defined(_SIGRTMIN)
+#       define SIG_SUSPEND (_SIGRTMIN + 6)
+#      else
+#       define SIG_SUSPEND (SIGRTMIN + 6)
+# endif
+# endif
+# endif /* !SIG_SUSPEND */
+
+# endif
+
+# endif /* GC_PRIVATE_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/gcconfig.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/gcconfig.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/gcconfig.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/gcconfig.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,2384 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 2000-2004 Hewlett-Packard Development Company, L.P.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/*
+ * This header is private to the gc. It is almost always included from
+ * gc_priv.h. However it is possible to include it by itself if just the
+ * configuration macros are needed. In that
+ * case, a few declarations relying on types declared in gc_priv.h will be
+ * omitted.
+ */
+
+#ifndef GCCONFIG_H
+
+# define GCCONFIG_H
+
+# ifndef GC_PRIVATE_H
+ /* Fake ptr_t declaration, just to avoid compilation errors. */
+   /* This avoids many instances of "ifndef GC_PRIVATE_H" below. */
+ typedef struct GC_undefined_struct * ptr_t;
+# endif
+
+/* Machine dependent parameters. Some tuning parameters can be found */
+/* near the top of gc_priv.h. */
+
+/* Machine specific parts contributed by various people. See README file. */
+
+/* First a unified test for Linux: */
+# if defined(linux) || defined(__linux__)
+# ifndef LINUX
+# define LINUX
+# endif
+# endif
+
+/* And one for NetBSD: */
+# if defined(__NetBSD__)
+# define NETBSD
+# endif
+
+/* And one for OpenBSD: */
+# if defined(__OpenBSD__)
+# define OPENBSD
+# endif
+
+/* And one for FreeBSD: */
+# if ( defined(__FreeBSD__) || defined(__FreeBSD_kernel__) ) && !defined(FREEBSD)
+# define FREEBSD
+# endif
+
+/* Determine the machine type: */
+# if defined(__arm__) || defined(__thumb__)
+# define ARM32
+# if !defined(LINUX) && !defined(NETBSD)
+# define NOSYS
+# define mach_type_known
+# endif
+# endif
+# if defined(sun) && defined(mc68000)
+# define M68K
+# define SUNOS4
+# define mach_type_known
+# endif
+# if defined(hp9000s300)
+# define M68K
+# define HP
+# define mach_type_known
+# endif
+# if defined(OPENBSD) && defined(m68k)
+# define M68K
+# define mach_type_known
+# endif
+# if defined(OPENBSD) && defined(__sparc__)
+# define SPARC
+# define mach_type_known
+# endif
+# if defined(NETBSD) && (defined(m68k) || defined(__m68k__))
+# define M68K
+# define mach_type_known
+# endif
+# if defined(NETBSD) && defined(__powerpc__)
+# define POWERPC
+# define mach_type_known
+# endif
+# if defined(NETBSD) && (defined(__arm32__) || defined(__arm__))
+# define ARM32
+# define mach_type_known
+# endif
+# if defined(NETBSD) && defined(__sh__)
+# define SH
+# define mach_type_known
+# endif
+# if defined(vax)
+# define VAX
+# ifdef ultrix
+# define ULTRIX
+# else
+# define BSD
+# endif
+# define mach_type_known
+# endif
+# if defined(__NetBSD__) && defined(__vax__)
+# define VAX
+# define mach_type_known
+# endif
+# if defined(mips) || defined(__mips) || defined(_mips)
+# define MIPS
+# if defined(nec_ews) || defined(_nec_ews)
+# define EWS4800
+# endif
+# if !defined(LINUX) && !defined(EWS4800) && !defined(NETBSD)
+# if defined(ultrix) || defined(__ultrix)
+# define ULTRIX
+# else
+# if defined(_SYSTYPE_SVR4) || defined(SYSTYPE_SVR4) \
+ || defined(__SYSTYPE_SVR4__)
+# define IRIX5 /* or IRIX 6.X */
+# else
+# define RISCOS /* or IRIX 4.X */
+# endif
+# endif
+# endif /* !LINUX */
+# if defined(__NetBSD__) && defined(__MIPSEL__)
+# undef ULTRIX
+# endif
+# define mach_type_known
+# endif
+# if defined(DGUX) && (defined(i386) || defined(__i386__))
+# define I386
+# ifndef _USING_DGUX
+# define _USING_DGUX
+# endif
+# define mach_type_known
+# endif
+# if defined(sequent) && (defined(i386) || defined(__i386__))
+# define I386
+# define SEQUENT
+# define mach_type_known
+# endif
+# if defined(sun) && (defined(i386) || defined(__i386__))
+# define I386
+# define SUNOS5
+# define mach_type_known
+# endif
+# if (defined(__OS2__) || defined(__EMX__)) && defined(__32BIT__)
+# define I386
+# define OS2
+# define mach_type_known
+# endif
+# if defined(ibm032)
+# define RT
+# define mach_type_known
+# endif
+# if defined(sun) && (defined(sparc) || defined(__sparc))
+# define SPARC
+ /* Test for SunOS 5.x */
+# include <errno.h>
+# ifdef ECHRNG
+# define SUNOS5
+# else
+# define SUNOS4
+# endif
+# define mach_type_known
+# endif
+# if defined(sparc) && defined(unix) && !defined(sun) && !defined(linux) \
+ && !defined(__OpenBSD__) && !defined(__NetBSD__) && !defined(__FreeBSD__)
+# define SPARC
+# define DRSNX
+# define mach_type_known
+# endif
+# if defined(_IBMR2)
+# define RS6000
+# define mach_type_known
+# endif
+# if defined(__NetBSD__) && defined(__sparc__)
+# define SPARC
+# define mach_type_known
+# endif
+# if defined(_M_XENIX) && defined(_M_SYSV) && defined(_M_I386)
+ /* The above test may need refinement */
+# define I386
+# if defined(_SCO_ELF)
+# define SCO_ELF
+# else
+# define SCO
+# endif
+# define mach_type_known
+# endif
+# if defined(_AUX_SOURCE)
+# define M68K
+# define SYSV
+# define mach_type_known
+# endif
+# if defined(_PA_RISC1_0) || defined(_PA_RISC1_1) || defined(_PA_RISC2_0) \
+ || defined(hppa) || defined(__hppa__)
+# define HP_PA
+# if !defined(LINUX) && !defined(HPUX)
+# define HPUX
+# endif
+# define mach_type_known
+# endif
+# if defined(__ia64) && defined(_HPUX_SOURCE)
+# define IA64
+# ifndef HPUX
+# define HPUX
+# endif
+# define mach_type_known
+# endif
+# if defined(__BEOS__) && defined(_X86_)
+# define I386
+# define BEOS
+# define mach_type_known
+# endif
+# if defined(LINUX) && (defined(i386) || defined(__i386__))
+# define I386
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__x86_64__)
+# define X86_64
+# define mach_type_known
+# endif
+# if defined(LINUX) && (defined(__ia64__) || defined(__ia64))
+# define IA64
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__arm__)
+# define ARM32
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__cris__)
+# ifndef CRIS
+# define CRIS
+# endif
+# define mach_type_known
+# endif
+# if defined(LINUX) && (defined(powerpc) || defined(__powerpc__) || \
+ defined(powerpc64) || defined(__powerpc64__))
+# define POWERPC
+# define mach_type_known
+# endif
+# if defined(FREEBSD) && (defined(powerpc) || defined(__powerpc__))
+# define POWERPC
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__mc68000__)
+# define M68K
+# define mach_type_known
+# endif
+# if defined(LINUX) && (defined(sparc) || defined(__sparc__))
+# define SPARC
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__arm__)
+# define ARM32
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__sh__)
+# define SH
+# define mach_type_known
+# endif
+# if defined(LINUX) && defined(__m32r__)
+# define M32R
+# define mach_type_known
+# endif
+# if defined(__alpha) || defined(__alpha__)
+# define ALPHA
+# if !defined(LINUX) && !defined(NETBSD) && !defined(OPENBSD) && !defined(FREEBSD)
+#     define OSF1	/* a.k.a. Digital Unix */
+# endif
+# define mach_type_known
+# endif
+# if defined(_AMIGA) && !defined(AMIGA)
+# define AMIGA
+# endif
+# ifdef AMIGA
+# define M68K
+# define mach_type_known
+# endif
+# if defined(THINK_C) || defined(__MWERKS__) && !defined(__powerc)
+# define M68K
+# define MACOS
+# define mach_type_known
+# endif
+# if defined(__MWERKS__) && defined(__powerc) && !defined(__MACH__)
+# define POWERPC
+# define MACOS
+# define mach_type_known
+# endif
+# if defined(macosx) || (defined(__APPLE__) && defined(__MACH__))
+# define DARWIN
+# if defined(__ppc__) || defined(__ppc64__)
+# define POWERPC
+# define mach_type_known
+# elif defined(__x86_64__)
+# define X86_64
+# define mach_type_known
+# elif defined(__i386__)
+# define I386
+# define mach_type_known
+# endif
+# endif
+# if defined(NeXT) && defined(mc68000)
+# define M68K
+# define NEXT
+# define mach_type_known
+# endif
+# if defined(NeXT) && (defined(i386) || defined(__i386__))
+# define I386
+# define NEXT
+# define mach_type_known
+# endif
+# if defined(__OpenBSD__) && (defined(i386) || defined(__i386__))
+# define I386
+# define OPENBSD
+# define mach_type_known
+# endif
+# if defined(FREEBSD) && (defined(i386) || defined(__i386__))
+# define I386
+# define mach_type_known
+# endif
+# if defined(FREEBSD) && defined(__x86_64__)
+# define X86_64
+# define mach_type_known
+# endif
+# if defined(__NetBSD__) && (defined(i386) || defined(__i386__))
+# define I386
+# define mach_type_known
+# endif
+# if defined(__NetBSD__) && defined(__x86_64__)
+# define X86_64
+# define mach_type_known
+# endif
+# if defined(FREEBSD) && defined(__sparc__)
+# define SPARC
+# define mach_type_known
+# endif
+# if defined(bsdi) && (defined(i386) || defined(__i386__))
+# define I386
+# define BSDI
+# define mach_type_known
+# endif
+# if !defined(mach_type_known) && defined(__386BSD__)
+# define I386
+# define THREE86BSD
+# define mach_type_known
+# endif
+# if defined(_CX_UX) && defined(_M88K)
+# define M88K
+# define CX_UX
+# define mach_type_known
+# endif
+# if defined(DGUX) && defined(m88k)
+# define M88K
+ /* DGUX defined */
+# define mach_type_known
+# endif
+# if defined(_WIN32_WCE)
+ /* SH3, SH4, MIPS already defined for corresponding architectures */
+# if defined(SH3) || defined(SH4)
+# define SH
+# endif
+# if defined(x86)
+# define I386
+# endif
+# if defined(ARM)
+# define ARM32
+# endif
+# define MSWINCE
+# define mach_type_known
+# else
+# if (defined(_MSDOS) || defined(_MSC_VER)) && (_M_IX86 >= 300) \
+ || defined(_WIN32) && !defined(__CYGWIN32__) && !defined(__CYGWIN__)
+# define I386
+# define MSWIN32 /* or Win32s */
+# define mach_type_known
+# endif
+# if defined(_MSC_VER) && defined(_M_IA64)
+# define IA64
+# define MSWIN32 /* Really win64, but we don't treat 64-bit */
+			/* variants as a different platform.	*/
+# endif
+# endif
+# if defined(__DJGPP__)
+# define I386
+# ifndef DJGPP
+# define DJGPP /* MSDOS running the DJGPP port of GCC */
+# endif
+# define mach_type_known
+# endif
+# if defined(__CYGWIN32__) || defined(__CYGWIN__)
+# define I386
+# define CYGWIN32
+# define mach_type_known
+# endif
+# if defined(__MINGW32__)
+# define I386
+# define MSWIN32
+# define mach_type_known
+# endif
+# if defined(__BORLANDC__)
+# define I386
+# define MSWIN32
+# define mach_type_known
+# endif
+# if defined(_UTS) && !defined(mach_type_known)
+# define S370
+# define UTS4
+# define mach_type_known
+# endif
+# if defined(__pj__)
+# define PJ
+# define mach_type_known
+# endif
+# if defined(__embedded__) && defined(PPC)
+# define POWERPC
+# define NOSYS
+# define mach_type_known
+# endif
+/* Ivan Demakov */
+# if defined(__WATCOMC__) && defined(__386__)
+# define I386
+# if !defined(OS2) && !defined(MSWIN32) && !defined(DOS4GW)
+# if defined(__OS2__)
+# define OS2
+# else
+# if defined(__WINDOWS_386__) || defined(__NT__)
+# define MSWIN32
+# else
+# define DOS4GW
+# endif
+# endif
+# endif
+# define mach_type_known
+# endif
+# if defined(__s390__) && defined(LINUX)
+# define S390
+# define mach_type_known
+# endif
+# if defined(__GNU__)
+# if defined(__i386__)
+/* The Debian Hurd running on a generic PC */
+# define HURD
+# define I386
+# define mach_type_known
+# endif
+# endif
+
+/* Feel free to add more clauses here */
+
+/* Or manually define the machine type here.  A machine type is      */
+/* characterized by the architecture.  Some machine types are        */
+/* further subdivided by OS; in those cases, we use macros such as   */
+/* ULTRIX, RISCOS, and BSD to distinguish them.                      */
+/* Note that SGI IRIX is treated identically to RISCOS.              */
+/* SYSV on an M68K actually means A/UX.                              */
+/* The distinction in these cases is usually the stack starting address. */
+# ifndef mach_type_known
+ --> unknown machine type
+# endif
+ /* Mapping is: M68K ==> Motorola 680X0 */
+ /* (SUNOS4,HP,NEXT, and SYSV (A/UX), */
+ /* MACOS and AMIGA variants) */
+ /* I386 ==> Intel 386 */
+ /* (SEQUENT, OS2, SCO, LINUX, NETBSD, */
+ /* FREEBSD, THREE86BSD, MSWIN32, */
+ /* BSDI,SUNOS5, NEXT, other variants) */
+ /* NS32K ==> Encore Multimax */
+ /* MIPS ==> R2000 or R3000 */
+ /* (RISCOS, ULTRIX variants) */
+ /* VAX ==> DEC VAX */
+ /* (BSD, ULTRIX variants) */
+ /* RS6000 ==> IBM RS/6000 AIX3.X */
+ /* RT ==> IBM PC/RT */
+ /* HP_PA ==> HP9000/700 & /800 */
+ /* HP/UX, LINUX */
+ /* SPARC ==> SPARC v7/v8/v9 */
+ /* (SUNOS4, SUNOS5, LINUX, */
+ /* DRSNX variants) */
+ /* ALPHA ==> DEC Alpha */
+ /* (OSF1 and LINUX variants) */
+ /* M88K ==> Motorola 88XX0 */
+ /* (CX_UX and DGUX) */
+ /* S370 ==> 370-like machine */
+ /* running Amdahl UTS4 */
+ /* S390 ==> 390-like machine */
+ /* running LINUX */
+ /* ARM32 ==> Intel StrongARM */
+ /* IA64 ==> Intel IPF */
+ /* (e.g. Itanium) */
+ /* (LINUX and HPUX) */
+ /* SH ==> Hitachi SuperH */
+ /* (LINUX & MSWINCE) */
+ /* X86_64 ==> AMD x86-64 */
+ /* POWERPC ==> IBM/Apple PowerPC */
+ /* (MACOS(<=9),DARWIN(incl.MACOSX),*/
+ /* LINUX, NETBSD, NOSYS variants) */
+ /* Handles 32 and 64-bit variants. */
+ /* AIX should be handled here, but */
+ /* that's called an RS6000. */
+ /* CRIS ==> Axis Etrax */
+ /* M32R ==> Renesas M32R */
+
+
+/*
+ * For each architecture and OS, the following need to be defined:
+ *
+ * CPP_WORDSZ is a simple integer constant representing the word size
+ * in bits.  We assume byte addressability, where a byte has 8 bits.
+ * We also assume CPP_WORDSZ is either 32 or 64.
+ * (We care about the length of pointers, not hardware
+ * bus widths. Thus a 64 bit processor with a C compiler that uses
+ * 32 bit pointers should use CPP_WORDSZ of 32, not 64. Default is 32.)
+ *
+ * MACH_TYPE is a string representation of the machine type.
+ * OS_TYPE is analogous for the OS.
+ *
+ * ALIGNMENT is the largest N such that
+ * all pointers are guaranteed to be aligned on N-byte boundaries.
+ * Defining it to be 1 will always work, but performs poorly.
+ *
+ * DATASTART is the beginning of the data segment.
+ * On some platforms SEARCH_FOR_DATA_START is defined.
+ * SEARCH_FOR_DATA_START will cause GC_data_start to
+ * be set to an address determined by accessing data backwards from _end
+ * until an unmapped page is found. DATASTART will be defined to be
+ * GC_data_start.
+ * On UNIX-like systems, the collector will scan the area between DATASTART
+ * and DATAEND for root pointers.
+ *
+ * DATAEND, if not `end', where `end' is defined as ``extern int end[];''.
+ * RTH suggests gaining access to linker-script-synthesized values with
+ * this idiom rather than `&end', where `end' is defined as ``extern int end;''.
+ * Otherwise, GCC will assume these are in .sdata/.sbss and will, e.g.,
+ * cause failures on alpha*-*-* with ``-msmall-data or -fpic'' or on mips-*-*
+ * without any special options.
+ *
+ * If ALIGN_DOUBLE is defined, GC_malloc should return blocks aligned
+ * to twice the pointer size.
+ *
+ * STACKBOTTOM is the cool end of the stack, which is usually the
+ * highest address in the stack.
+ * Under PCR or OS/2, we have other ways of finding thread stacks.
+ * For each machine, the following should:
+ * 1) define STACK_GROWS_UP if the stack grows toward higher addresses, and
+ * 2) define exactly one of
+ * STACKBOTTOM (should be defined to be an expression)
+ * LINUX_STACKBOTTOM
+ * HEURISTIC1
+ * HEURISTIC2
+ * If STACKBOTTOM is defined, then its value will be used directly as the
+ * stack base. If LINUX_STACKBOTTOM is defined, then it will be determined
+ * with a method appropriate for most Linux systems. Currently we look
+ * first for __libc_stack_end, and if that fails read it from /proc.
+ * If either of the last two macros is defined, then STACKBOTTOM is computed
+ * during collector startup using one of the following two heuristics:
+ * HEURISTIC1: Take an address inside GC_init's frame, and round it up to
+ * the next multiple of STACK_GRAN.
+ * HEURISTIC2: Take an address inside GC_init's frame, increment it repeatedly
+ * in small steps (decrement if STACK_GROWS_UP), and read the value
+ * at each location. Remember the value when the first
+ * Segmentation violation or Bus error is signalled. Round that
+ * to the nearest plausible page boundary, and use that instead
+ * of STACKBOTTOM.
+ *
+ * Gustavo Rodriguez-Rivera points out that on most (all?) Unix machines,
+ * the value of environ is a pointer that can serve as STACKBOTTOM.
+ * I expect that HEURISTIC2 can be replaced by this approach, which
+ * interferes far less with debugging. However it has the disadvantage
+ * that it's confused by a putenv call before the collector is initialized.
+ * This could be dealt with by intercepting putenv ...
+ *
+ * If no expression for STACKBOTTOM can be found, and neither of the above
+ * heuristics is usable, the collector can still be used with all of the above
+ * undefined, provided one of the following is done:
+ * 1) GC_mark_roots can be changed to somehow mark from the correct stack(s)
+ * without reference to STACKBOTTOM. This is appropriate for use in
+ * conjunction with thread packages, since there will be multiple stacks.
+ * (Allocating thread stacks in the heap, and treating them as ordinary
+ * heap data objects is also possible as a last resort. However, this is
+ * likely to introduce significant amounts of excess storage retention
+ * unless the dead parts of the thread stacks are periodically cleared.)
+ * 2) Client code may set GC_stackbottom before calling any GC_ routines.
+ * If the author of the client code controls the main program, this is
+ * easily accomplished by introducing a new main program, setting
+ * GC_stackbottom to the address of a local variable, and then calling
+ * the original main program. The new main program would read something
+ * like:
+ *
+ *	# include "gc_priv.h"
+ *
+ *	int main(int argc, char **argv, char **envp)
+ *	{
+ *	    int dummy;
+ *
+ *	    GC_stackbottom = (ptr_t)(&dummy);
+ *	    return real_main(argc, argv, envp);
+ *	}
+ *
+ *
+ * Each architecture may also define the style of virtual dirty bit
+ * implementation to be used:
+ * MPROTECT_VDB: Write protect the heap and catch faults.
+ * PROC_VDB: Use the SVR4 /proc primitives to read dirty bits.
+ *
+ * An architecture may define DYNAMIC_LOADING if dynamic_load.c
+ * defines GC_register_dynamic_libraries() for the architecture.
+ *
+ * An architecture may define PREFETCH(x) to preload the cache with *x.
+ * This defaults to a no-op.
+ *
+ * PREFETCH_FOR_WRITE(x) is used if *x is about to be written.
+ *
+ * An architecture may also define CLEAR_DOUBLE(x) to be a fast way to
+ * clear the two words at GC_malloc-aligned address x. By default,
+ * word stores of 0 are used instead.
+ *
+ * HEAP_START may be defined as the initial address hint for mmap-based
+ * allocation.
+ */
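+
+/* As a minimal illustrative sketch (not an actual port below), a    */
+/* hypothetical 32-bit target with a fixed stack base could satisfy  */
+/* the contract above with definitions like the following; "NEWARCH",*/
+/* "NEWOS", and the addresses are placeholders, not real values:     */
+/*                                                                   */
+/*   # define MACH_TYPE "NEWARCH"                                    */
+/*   # define OS_TYPE "NEWOS"                                        */
+/*   # define CPP_WORDSZ 32                                          */
+/*   # define ALIGNMENT 4                                            */
+/*     extern int _end[];                                            */
+/*   # define DATAEND (_end)                                         */
+/*   # define DATASTART ((ptr_t)0x10000000)                          */
+/*   # define STACKBOTTOM ((ptr_t)0xc0000000)                        */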
+
+/* If we are using a recent version of gcc, we can use __builtin_unwind_init()
+ * to push the relevant registers onto the stack. This generally makes
+ * USE_GENERIC_PUSH_REGS the preferred approach for marking from registers.
+ */
+# if defined(__GNUC__) && ((__GNUC__ >= 3) || \
+ (__GNUC__ == 2 && __GNUC_MINOR__ >= 8)) \
+ && !defined(__INTEL_COMPILER) \
+ && !defined(__PATHCC__)
+# define HAVE_BUILTIN_UNWIND_INIT
+# endif
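+
+/* For illustration only: with HAVE_BUILTIN_UNWIND_INIT, the generic */
+/* register-marking path can force callee-saved registers into the   */
+/* current frame before the stack is scanned.  A simplified sketch,  */
+/* where push_regs_and_mark() and mark_stack() are hypothetical      */
+/* placeholders for the real code in mach_dep.c:                     */
+/*                                                                   */
+/*   static void push_regs_and_mark(void (*mark_stack)(void))        */
+/*   {                                                               */
+/*       __builtin_unwind_init();  (spill callee-saved registers)    */
+/*       mark_stack();             (scan stack, registers included)  */
+/*   }                                                               */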
+
+# define STACK_GRAN 0x1000000
+# ifdef M68K
+# define MACH_TYPE "M68K"
+# define ALIGNMENT 2
+# ifdef OPENBSD
+# define OS_TYPE "OPENBSD"
+# define HEURISTIC2
+# ifdef __ELF__
+# define DATASTART GC_data_start
+# define DYNAMIC_LOADING
+# else
+ extern char etext[];
+# define DATASTART ((ptr_t)(etext))
+# endif
+# define USE_GENERIC_PUSH_REGS
+# endif
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+# ifdef __ELF__
+# define DATASTART GC_data_start
+# define DYNAMIC_LOADING
+# else
+ extern char etext[];
+# define DATASTART ((ptr_t)(etext))
+# endif
+# define USE_GENERIC_PUSH_REGS
+# endif
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define STACKBOTTOM ((ptr_t)0xf0000000)
+# define USE_GENERIC_PUSH_REGS
+ /* We never got around to the assembly version. */
+/* # define MPROTECT_VDB - Reported to not work 9/17/01 */
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# include <features.h>
+# if defined(__GLIBC__)&& __GLIBC__>=2
+# define SEARCH_FOR_DATA_START
+# else /* !GLIBC2 */
+ extern char **__environ;
+# define DATASTART ((ptr_t)(&__environ))
+ /* hideous kludge: __environ is the first */
+ /* word in crt0.o, and delimits the start */
+ /* of the data segment, no matter which */
+ /* ld options were passed through. */
+ /* We could use _etext instead, but that */
+ /* would include .rodata, which may */
+ /* contain large read-only data tables */
+ /* that we'd rather not scan. */
+# endif /* !GLIBC2 */
+ extern int _end[];
+# define DATAEND (_end)
+# else
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
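+	/* The expression above rounds etext up to the next 0x1000 (4K)  */
+	/* page boundary; the data segment is assumed to begin on the    */
+	/* first page boundary at or above the end of the text segment.  */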
+# endif
+# endif
+# ifdef SUNOS4
+# define OS_TYPE "SUNOS4"
+ extern char etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0x1ffff) & ~0x1ffff))
+# define HEURISTIC1 /* differs */
+# define DYNAMIC_LOADING
+# endif
+# ifdef HP
+# define OS_TYPE "HP"
+ extern char etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
+# define STACKBOTTOM ((ptr_t) 0xffeffffc)
+			/* Empirically determined; seems to work. */
+# include <unistd.h>
+# define GETPAGESIZE() sysconf(_SC_PAGE_SIZE)
+# endif
+# ifdef SYSV
+# define OS_TYPE "SYSV"
+ extern etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0x3fffff) \
+ & ~0x3fffff) \
+ +((word)etext & 0x1fff))
+ /* This only works for shared-text binaries with magic number 0413.
+ The other sorts of SysV binaries put the data at the end of the text,
+ in which case the default of etext would work. Unfortunately,
+ handling both would require having the magic-number available.
+ -- Parag
+ */
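+	/* Worked example (illustrative only): if etext == 0x412345,  */
+	/* the expression rounds up to the next 4 MB boundary          */
+	/* (0x800000) and adds the text's offset within an 8 KB page   */
+	/* (0x345), giving DATASTART == 0x800345.                      */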
+# define STACKBOTTOM ((ptr_t)0xFFFFFFFE)
+ /* The stack starts at the top of memory, but */
+ /* 0x0 cannot be used as setjump_test complains */
+ /* that the stack direction is incorrect. Two */
+ /* bytes down from 0x0 should be safe enough. */
+ /* --Parag */
+# include <sys/mmu.h>
+# define GETPAGESIZE() PAGESIZE /* Is this still right? */
+# endif
+# ifdef AMIGA
+# define OS_TYPE "AMIGA"
+ /* STACKBOTTOM and DATASTART handled specially */
+ /* in os_dep.c */
+# define DATAEND /* not needed */
+# define GETPAGESIZE() 4096
+# endif
+# ifdef MACOS
+# ifndef __LOWMEM__
+# include <LowMem.h>
+# endif
+# define OS_TYPE "MACOS"
+ /* see os_dep.c for details of global data segments. */
+# define STACKBOTTOM ((ptr_t) LMGetCurStackBase())
+# define DATAEND /* not needed */
+# define GETPAGESIZE() 4096
+# endif
+# ifdef NEXT
+# define OS_TYPE "NEXT"
+# define DATASTART ((ptr_t) get_etext())
+# define STACKBOTTOM ((ptr_t) 0x4000000)
+# define DATAEND /* not needed */
+# endif
+# endif
+
+# if defined(POWERPC)
+# define MACH_TYPE "POWERPC"
+# ifdef MACOS
+# define ALIGNMENT 2 /* Still necessary? Could it be 4? */
+# ifndef __LOWMEM__
+# include <LowMem.h>
+# endif
+# define OS_TYPE "MACOS"
+ /* see os_dep.c for details of global data segments. */
+# define STACKBOTTOM ((ptr_t) LMGetCurStackBase())
+# define DATAEND /* not needed */
+# endif
+# ifdef LINUX
+# if defined(__powerpc64__)
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# ifndef HBLKSIZE
+# define HBLKSIZE 4096
+# endif
+# else
+# define ALIGNMENT 4
+# endif
+# define OS_TYPE "LINUX"
+ /* HEURISTIC1 has been reliably reported to fail for a 32-bit */
+ /* executable on a 64 bit kernel. */
+# define LINUX_STACKBOTTOM
+# define DYNAMIC_LOADING
+# define SEARCH_FOR_DATA_START
+ extern int _end[];
+# define DATAEND (_end)
+# endif
+# ifdef DARWIN
+# define OS_TYPE "DARWIN"
+# define DYNAMIC_LOADING
+# if defined(__ppc64__)
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# define STACKBOTTOM ((ptr_t) 0x7fff5fc00000)
+# define CACHE_LINE_SIZE 64
+# ifndef HBLKSIZE
+# define HBLKSIZE 4096
+# endif
+# else
+# define ALIGNMENT 4
+# define STACKBOTTOM ((ptr_t) 0xc0000000)
+# endif
+    /* XXX: According to get_end(3), get_etext() and get_end() should not
+       be used. They aren't used when dyld support is enabled (it is by default). */
+# define DATASTART ((ptr_t) get_etext())
+# define DATAEND ((ptr_t) get_end())
+# define USE_MMAP
+# define USE_MMAP_ANON
+# define USE_ASM_PUSH_REGS
+# ifdef GC_DARWIN_THREADS
+# define MPROTECT_VDB
+# endif
+# include <unistd.h>
+# define GETPAGESIZE() getpagesize()
+# if defined(USE_PPC_PREFETCH) && defined(__GNUC__)
+ /* The performance impact of prefetches is untested */
+# define PREFETCH(x) \
+ __asm__ __volatile__ ("dcbt 0,%0" : : "r" ((const void *) (x)))
+# define PREFETCH_FOR_WRITE(x) \
+ __asm__ __volatile__ ("dcbtst 0,%0" : : "r" ((const void *) (x)))
+# endif
+    /* There seem to be some issues with trylock hanging on Darwin. This
+       should be looked into some more. */
+# define NO_PTHREAD_TRYLOCK
+# endif
+# ifdef FREEBSD
+# define ALIGNMENT 4
+# define OS_TYPE "FREEBSD"
+# ifndef GC_FREEBSD_THREADS
+# define MPROTECT_VDB
+# endif
+# define SIG_SUSPEND SIGUSR1
+# define SIG_THR_RESTART SIGUSR2
+# define FREEBSD_STACKBOTTOM
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+ extern char etext[];
+ extern char * GC_FreeBSDGetDataStart();
+# define DATASTART GC_FreeBSDGetDataStart(0x1000, &etext)
+# endif
+# ifdef NETBSD
+# define ALIGNMENT 4
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+ extern char etext[];
+# define DATASTART GC_data_start
+# define DYNAMIC_LOADING
+# endif
+# ifdef NOSYS
+# define ALIGNMENT 4
+# define OS_TYPE "NOSYS"
+ extern void __end[], __dso_handle[];
+# define DATASTART (__dso_handle) /* OK, that's ugly. */
+# define DATAEND (__end)
+ /* Stack starts at 0xE0000000 for the simulator. */
+# undef STACK_GRAN
+# define STACK_GRAN 0x10000000
+# define HEURISTIC1
+# endif
+# endif
+
+# ifdef VAX
+# define MACH_TYPE "VAX"
+# define ALIGNMENT 4 /* Pointers are longword aligned by 4.2 C compiler */
+ extern char etext[];
+# define DATASTART ((ptr_t)(etext))
+# ifdef BSD
+# define OS_TYPE "BSD"
+# define HEURISTIC1
+ /* HEURISTIC2 may be OK, but it's hard to test. */
+# endif
+# ifdef ULTRIX
+# define OS_TYPE "ULTRIX"
+# define STACKBOTTOM ((ptr_t) 0x7fffc800)
+# endif
+# endif
+
+# ifdef RT
+# define MACH_TYPE "RT"
+# define ALIGNMENT 4
+# define DATASTART ((ptr_t) 0x10000000)
+# define STACKBOTTOM ((ptr_t) 0x1fffd800)
+# endif
+
+# ifdef SPARC
+# define MACH_TYPE "SPARC"
+# if defined(__arch64__) || defined(__sparcv9)
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# define ELF_CLASS ELFCLASS64
+# else
+# define ALIGNMENT 4 /* Required by hardware */
+# define CPP_WORDSZ 32
+# endif
+# define ALIGN_DOUBLE
+# ifdef SUNOS5
+# define OS_TYPE "SUNOS5"
+ extern int _etext[];
+ extern int _end[];
+ extern ptr_t GC_SysVGetDataStart();
+# define DATASTART GC_SysVGetDataStart(0x10000, _etext)
+# define DATAEND (_end)
+# if !defined(USE_MMAP) && defined(REDIRECT_MALLOC)
+# define USE_MMAP
+ /* Otherwise we now use calloc. Mmap may result in the */
+ /* heap interleaved with thread stacks, which can result in */
+ /* excessive blacklisting. Sbrk is unusable since it */
+ /* doesn't interact correctly with the system malloc. */
+# endif
+# ifdef USE_MMAP
+# define HEAP_START (ptr_t)0x40000000
+# else
+# define HEAP_START DATAEND
+# endif
+# define PROC_VDB
+/* HEURISTIC1 reportedly no longer works under 2.7. */
+/* HEURISTIC2 probably works, but this appears to be preferable. */
+/* Apparently USRSTACK is defined to be USERLIMIT, but in some */
+/* installations that's undefined. We work around this with a */
+/* gross hack: */
+# include <sys/vmparam.h>
+# ifdef USERLIMIT
+ /* This should work everywhere, but doesn't. */
+# define STACKBOTTOM USRSTACK
+# else
+# define HEURISTIC2
+# endif
+# include <unistd.h>
+# define GETPAGESIZE() sysconf(_SC_PAGESIZE)
+ /* getpagesize() appeared to be missing from at least one */
+ /* Solaris 5.4 installation. Weird. */
+# define DYNAMIC_LOADING
+# endif
+# ifdef SUNOS4
+# define OS_TYPE "SUNOS4"
+ /* [If you have a weak stomach, don't read this.] */
+ /* We would like to use: */
+/* # define DATASTART ((ptr_t)((((word) (etext)) + 0x1fff) & ~0x1fff)) */
+ /* This fails occasionally, due to an ancient, but very */
+ /* persistent ld bug. etext is set 32 bytes too high. */
+ /* We instead read the text segment size from the a.out */
+ /* header, which happens to be mapped into our address space */
+ /* at the start of the text segment. The detective work here */
+ /* was done by Robert Ehrlich, Manuel Serrano, and Bernard */
+ /* Serpette of INRIA. */
+ /* This assumes ZMAGIC, i.e. demand-loadable executables. */
+# define TEXTSTART 0x2000
+# define DATASTART ((ptr_t)(*(int *)(TEXTSTART+0x4)+TEXTSTART))
+# define MPROTECT_VDB
+# define HEURISTIC1
+# define DYNAMIC_LOADING
+# endif
+# ifdef DRSNX
+# define OS_TYPE "DRSNX"
+ extern ptr_t GC_SysVGetDataStart();
+ extern int etext[];
+# define DATASTART GC_SysVGetDataStart(0x10000, etext)
+# define MPROTECT_VDB
+# define STACKBOTTOM ((ptr_t) 0xdfff0000)
+# define DYNAMIC_LOADING
+# endif
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# else
+ Linux Sparc/a.out not supported
+# endif
+ extern int _end[];
+ extern int _etext[];
+# define DATAEND (_end)
+# define SVR4
+ extern ptr_t GC_SysVGetDataStart();
+# ifdef __arch64__
+# define DATASTART GC_SysVGetDataStart(0x100000, _etext)
+# else
+# define DATASTART GC_SysVGetDataStart(0x10000, _etext)
+# endif
+# define LINUX_STACKBOTTOM
+# endif
+# ifdef OPENBSD
+# define OS_TYPE "OPENBSD"
+# define STACKBOTTOM ((ptr_t) 0xf8000000)
+ extern int etext[];
+# define DATASTART ((ptr_t)(etext))
+# endif
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+# ifdef __ELF__
+# define DATASTART GC_data_start
+# define DYNAMIC_LOADING
+# else
+ extern char etext[];
+# define DATASTART ((ptr_t)(etext))
+# endif
+# endif
+# ifdef FREEBSD
+# define OS_TYPE "FREEBSD"
+# define SIG_SUSPEND SIGUSR1
+# define SIG_THR_RESTART SIGUSR2
+# define FREEBSD_STACKBOTTOM
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+ extern char etext[];
+ extern char edata[];
+ extern char end[];
+# define NEED_FIND_LIMIT
+# define DATASTART ((ptr_t)(&etext))
+# define DATAEND (GC_find_limit (DATASTART, TRUE))
+# define DATASTART2 ((ptr_t)(&edata))
+# define DATAEND2 ((ptr_t)(&end))
+# endif
+# endif
+
+# ifdef I386
+# define MACH_TYPE "I386"
+# if defined(__LP64__) || defined(_WIN64)
+# define CPP_WORDSZ 64
+# define ALIGNMENT 8
+# else
+# define CPP_WORDSZ 32
+# define ALIGNMENT 4
+ /* Appears to hold for all "32 bit" compilers */
+ /* except Borland. The -a4 option fixes */
+ /* Borland. */
+ /* Ivan Demakov: For Watcom the option is -zp4. */
+# endif
+# ifndef SMALL_CONFIG
+# define ALIGN_DOUBLE /* Not strictly necessary, but may give speed */
+ /* improvement on Pentiums. */
+# endif
+# ifdef HAVE_BUILTIN_UNWIND_INIT
+# define USE_GENERIC_PUSH_REGS
+# endif
+# ifdef SEQUENT
+# define OS_TYPE "SEQUENT"
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
+# define STACKBOTTOM ((ptr_t) 0x3ffff000)
+# endif
+# ifdef BEOS
+# define OS_TYPE "BEOS"
+# include <OS.h>
+# define GETPAGESIZE() B_PAGE_SIZE
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
+# endif
+# ifdef SUNOS5
+# define OS_TYPE "SUNOS5"
+ extern int _etext[], _end[];
+ extern ptr_t GC_SysVGetDataStart();
+# define DATASTART GC_SysVGetDataStart(0x1000, _etext)
+# define DATAEND (_end)
+/* # define STACKBOTTOM ((ptr_t)(_start)) worked through 2.7, */
+/* but reportedly breaks under 2.8. It appears that the stack */
+/* base is a property of the executable, so this should not break */
+/* old executables. */
+/* HEURISTIC2 probably works, but this appears to be preferable. */
+# include <sys/vm.h>
+# define STACKBOTTOM USRSTACK
+/* At least in Solaris 2.5, PROC_VDB gives wrong values for dirty bits. */
+/* It appears to be fixed in 2.8 and 2.9. */
+# ifdef SOLARIS25_PROC_VDB_BUG_FIXED
+# define PROC_VDB
+# endif
+# define DYNAMIC_LOADING
+# if !defined(USE_MMAP) && defined(REDIRECT_MALLOC)
+# define USE_MMAP
+ /* Otherwise we now use calloc. Mmap may result in the */
+ /* heap interleaved with thread stacks, which can result in */
+ /* excessive blacklisting. Sbrk is unusable since it */
+ /* doesn't interact correctly with the system malloc. */
+# endif
+# ifdef USE_MMAP
+# define HEAP_START (ptr_t)0x40000000
+# else
+# define HEAP_START DATAEND
+# endif
+# endif
+# ifdef SCO
+# define OS_TYPE "SCO"
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0x3fffff) \
+ & ~0x3fffff) \
+ +((word)etext & 0xfff))
+# define STACKBOTTOM ((ptr_t) 0x7ffffffc)
+# endif
+# ifdef SCO_ELF
+# define OS_TYPE "SCO_ELF"
+ extern int etext[];
+# define DATASTART ((ptr_t)(etext))
+# define STACKBOTTOM ((ptr_t) 0x08048000)
+# define DYNAMIC_LOADING
+# define ELF_CLASS ELFCLASS32
+# endif
+# ifdef DGUX
+# define OS_TYPE "DGUX"
+ extern int _etext, _end;
+ extern ptr_t GC_SysVGetDataStart();
+# define DATASTART GC_SysVGetDataStart(0x1000, &_etext)
+# define DATAEND (&_end)
+# define STACK_GROWS_DOWN
+# define HEURISTIC2
+# include <unistd.h>
+# define GETPAGESIZE() sysconf(_SC_PAGESIZE)
+# define DYNAMIC_LOADING
+# ifndef USE_MMAP
+# define USE_MMAP
+# endif /* USE_MMAP */
+# define MAP_FAILED (void *) -1
+# ifdef USE_MMAP
+# define HEAP_START (ptr_t)0x40000000
+# else /* USE_MMAP */
+# define HEAP_START DATAEND
+# endif /* USE_MMAP */
+# endif /* DGUX */
+
+# ifdef LINUX
+# ifndef __GNUC__
+ /* The Intel compiler doesn't like inline assembly */
+# define USE_GENERIC_PUSH_REGS
+# endif
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# if 0
+# define HEURISTIC1
+# undef STACK_GRAN
+# define STACK_GRAN 0x10000000
+ /* STACKBOTTOM is usually 0xc0000000, but this changes with */
+ /* different kernel configurations. In particular, systems */
+ /* with 2GB physical memory will usually move the user */
+ /* address space limit, and hence initial SP to 0x80000000. */
+# endif
+# if !defined(GC_LINUX_THREADS) || !defined(REDIRECT_MALLOC)
+# define MPROTECT_VDB
+# else
+ /* We seem to get random errors in incremental mode, */
+ /* possibly because Linux threads is itself a malloc client */
+ /* and can't deal with the signals. */
+# endif
+# define HEAP_START (ptr_t)0x1000
+ /* This encourages mmap to give us low addresses, */
+ /* thus allowing the heap to grow to ~3GB */
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# ifdef UNDEFINED /* includes ro data */
+ extern int _etext[];
+# define DATASTART ((ptr_t)((((word) (_etext)) + 0xfff) & ~0xfff))
+# endif
+# include <features.h>
+# if defined(__GLIBC__) && __GLIBC__ >= 2
+# define SEARCH_FOR_DATA_START
+# else
+ extern char **__environ;
+# define DATASTART ((ptr_t)(&__environ))
+ /* hideous kludge: __environ is the first */
+ /* word in crt0.o, and delimits the start */
+ /* of the data segment, no matter which */
+ /* ld options were passed through. */
+ /* We could use _etext instead, but that */
+ /* would include .rodata, which may */
+ /* contain large read-only data tables */
+ /* that we'd rather not scan. */
+# endif
+ extern int _end[];
+# define DATAEND (_end)
+# else
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
+# endif
+# ifdef USE_I686_PREFETCH
+	/* FIXME: This should use __builtin_prefetch, but we'll leave that */
+	/* for the next release.					    */
+# define PREFETCH(x) \
+ __asm__ __volatile__ (" prefetchnta %0": : "m"(*(char *)(x)))
+ /* Empirically prefetcht0 is much more effective at reducing */
+	/* cache miss stalls for the targeted load instructions.  But it   */
+ /* seems to interfere enough with other cache traffic that the net */
+ /* result is worse than prefetchnta. */
+# if 0
+ /* Using prefetches for write seems to have a slight negative */
+ /* impact on performance, at least for a PIII/500. */
+# define PREFETCH_FOR_WRITE(x) \
+ __asm__ __volatile__ (" prefetcht0 %0": : "m"(*(char *)(x)))
+# endif
+# endif
+# ifdef USE_3DNOW_PREFETCH
+# define PREFETCH(x) \
+ __asm__ __volatile__ (" prefetch %0": : "m"(*(char *)(x)))
+# define PREFETCH_FOR_WRITE(x) \
+ __asm__ __volatile__ (" prefetchw %0": : "m"(*(char *)(x)))
+# endif
+# endif
+# ifdef CYGWIN32
+# define OS_TYPE "CYGWIN32"
+# define DATASTART ((ptr_t)GC_DATASTART) /* From gc.h */
+# define DATAEND ((ptr_t)GC_DATAEND)
+# undef STACK_GRAN
+# define STACK_GRAN 0x10000
+# define HEURISTIC1
+# endif
+# ifdef OS2
+# define OS_TYPE "OS2"
+ /* STACKBOTTOM and DATASTART are handled specially in */
+ /* os_dep.c. OS2 actually has the right */
+ /* system call! */
+# define DATAEND /* not needed */
+# define USE_GENERIC_PUSH_REGS
+# endif
+# ifdef MSWIN32
+# define OS_TYPE "MSWIN32"
+ /* STACKBOTTOM and DATASTART are handled specially in */
+ /* os_dep.c. */
+# ifndef __WATCOMC__
+# define MPROTECT_VDB
+# endif
+# define DATAEND /* not needed */
+# endif
+# ifdef MSWINCE
+# define OS_TYPE "MSWINCE"
+# define DATAEND /* not needed */
+# endif
+# ifdef DJGPP
+# define OS_TYPE "DJGPP"
+# include "stubinfo.h"
+ extern int etext[];
+ extern int _stklen;
+ extern int __djgpp_stack_limit;
+# define DATASTART ((ptr_t)((((word) (etext)) + 0x1ff) & ~0x1ff))
+/* # define STACKBOTTOM ((ptr_t)((word) _stubinfo + _stubinfo->size \
+ + _stklen)) */
+# define STACKBOTTOM ((ptr_t)((word) __djgpp_stack_limit + _stklen))
+ /* This may not be right. */
+# endif
+# ifdef OPENBSD
+# define OS_TYPE "OPENBSD"
+# endif
+# ifdef FREEBSD
+# define OS_TYPE "FREEBSD"
+# ifndef GC_FREEBSD_THREADS
+# define MPROTECT_VDB
+# endif
+# ifdef __GLIBC__
+# define SIG_SUSPEND (32+6)
+# define SIG_THR_RESTART (32+5)
+ extern int _end[];
+# define DATAEND (_end)
+# else
+# define SIG_SUSPEND SIGUSR1
+# define SIG_THR_RESTART SIGUSR2
+# endif
+# define FREEBSD_STACKBOTTOM
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+ extern char etext[];
+ extern char * GC_FreeBSDGetDataStart();
+# define DATASTART GC_FreeBSDGetDataStart(0x1000, &etext)
+# endif
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+# endif
+# ifdef THREE86BSD
+# define OS_TYPE "THREE86BSD"
+# endif
+# ifdef BSDI
+# define OS_TYPE "BSDI"
+# endif
+# if defined(OPENBSD) || defined(NETBSD) \
+ || defined(THREE86BSD) || defined(BSDI)
+# define HEURISTIC2
+ extern char etext[];
+# define DATASTART ((ptr_t)(etext))
+# endif
+# ifdef NEXT
+# define OS_TYPE "NEXT"
+# define DATASTART ((ptr_t) get_etext())
+# define STACKBOTTOM ((ptr_t)0xc0000000)
+# define DATAEND /* not needed */
+# endif
+# ifdef DOS4GW
+# define OS_TYPE "DOS4GW"
+ extern long __nullarea;
+ extern char _end;
+ extern char *_STACKTOP;
+	/* Depending on calling conventions, Watcom C either does or
+	   does not prepend an underscore to the names of C variables.
+	   Make sure startup code variables always have the same names. */
+ #pragma aux __nullarea "*";
+ #pragma aux _end "*";
+# define STACKBOTTOM ((ptr_t) _STACKTOP)
+ /* confused? me too. */
+# define DATASTART ((ptr_t) &__nullarea)
+# define DATAEND ((ptr_t) &_end)
+# endif
+# ifdef HURD
+# define OS_TYPE "HURD"
+# define STACK_GROWS_DOWN
+# define HEURISTIC2
+ extern int __data_start[];
+# define DATASTART ( (ptr_t) (__data_start))
+ extern int _end[];
+# define DATAEND ( (ptr_t) (_end))
+/* # define MPROTECT_VDB Not quite working yet? */
+# define DYNAMIC_LOADING
+# endif
+# ifdef DARWIN
+# define OS_TYPE "DARWIN"
+# define DARWIN_DONT_PARSE_STACK
+# define DYNAMIC_LOADING
+    /* XXX: According to get_end(3), get_etext() and get_end() should not
+       be used. They aren't used when dyld support is enabled (it is by default). */
+# define DATASTART ((ptr_t) get_etext())
+# define DATAEND ((ptr_t) get_end())
+# define STACKBOTTOM ((ptr_t) 0xc0000000)
+# define USE_MMAP
+# define USE_MMAP_ANON
+# define USE_ASM_PUSH_REGS
+# ifdef GC_DARWIN_THREADS
+# define MPROTECT_VDB
+# endif
+# include <unistd.h>
+# define GETPAGESIZE() getpagesize()
+    /* There seem to be some issues with trylock hanging on Darwin. This
+       should be looked into some more. */
+# define NO_PTHREAD_TRYLOCK
+# endif /* DARWIN */
+# endif
+
+# ifdef NS32K
+# define MACH_TYPE "NS32K"
+# define ALIGNMENT 4
+ extern char **environ;
+# define DATASTART ((ptr_t)(&environ))
+ /* hideous kludge: environ is the first */
+ /* word in crt0.o, and delimits the start */
+ /* of the data segment, no matter which */
+ /* ld options were passed through. */
+# define STACKBOTTOM ((ptr_t) 0xfffff000) /* for Encore */
+# endif
+
+# ifdef MIPS
+# define MACH_TYPE "MIPS"
+# ifdef LINUX
+      /* This was developed for a LinuxCE-style platform.  It probably */
+      /* needs to be tweaked for workstation-class machines.           */
+# define OS_TYPE "LINUX"
+# define DYNAMIC_LOADING
+ extern int _end[];
+# define DATAEND (_end)
+ extern int __data_start[];
+# define DATASTART ((ptr_t)(__data_start))
+# define ALIGNMENT 4
+# define USE_GENERIC_PUSH_REGS
+# if __GLIBC__ == 2 && __GLIBC_MINOR__ >= 2 || __GLIBC__ > 2
+# define LINUX_STACKBOTTOM
+# else
+# define STACKBOTTOM 0x80000000
+# endif
+# endif /* Linux */
+# ifdef EWS4800
+# define HEURISTIC2
+# if defined(_MIPS_SZPTR) && (_MIPS_SZPTR == 64)
+ extern int _fdata[], _end[];
+# define DATASTART ((ptr_t)_fdata)
+# define DATAEND ((ptr_t)_end)
+# define CPP_WORDSZ _MIPS_SZPTR
+# define ALIGNMENT (_MIPS_SZPTR/8)
+# else
+ extern int etext[], edata[], end[];
+ extern int _DYNAMIC_LINKING[], _gp[];
+# define DATASTART ((ptr_t)((((word)etext + 0x3ffff) & ~0x3ffff) \
+ + ((word)etext & 0xffff)))
+# define DATAEND (edata)
+# define DATASTART2 (_DYNAMIC_LINKING \
+ ? (ptr_t)(((word)_gp + 0x8000 + 0x3ffff) & ~0x3ffff) \
+ : (ptr_t)edata)
+# define DATAEND2 (end)
+# define ALIGNMENT 4
+# endif
+# define OS_TYPE "EWS4800"
+# define USE_GENERIC_PUSH_REGS 1
+# endif
+# ifdef ULTRIX
+# define HEURISTIC2
+# define DATASTART (ptr_t)0x10000000
+ /* Could probably be slightly higher since */
+ /* startup code allocates lots of stuff. */
+# define OS_TYPE "ULTRIX"
+# define ALIGNMENT 4
+# endif
+# ifdef RISCOS
+# define HEURISTIC2
+# define DATASTART (ptr_t)0x10000000
+# define OS_TYPE "RISCOS"
+# define ALIGNMENT 4 /* Required by hardware */
+# endif
+# ifdef IRIX5
+# define HEURISTIC2
+ extern int _fdata[];
+# define DATASTART ((ptr_t)(_fdata))
+# ifdef USE_MMAP
+# define HEAP_START (ptr_t)0x30000000
+# else
+# define HEAP_START DATASTART
+# endif
+ /* Lowest plausible heap address. */
+ /* In the MMAP case, we map there. */
+ /* In either case it is used to identify */
+ /* heap sections so they're not */
+ /* considered as roots. */
+# define OS_TYPE "IRIX5"
+/*# define MPROTECT_VDB DOB: this should work, but there is evidence */
+/* of recent breakage. */
+# ifdef _MIPS_SZPTR
+# define CPP_WORDSZ _MIPS_SZPTR
+# define ALIGNMENT (_MIPS_SZPTR/8)
+# if CPP_WORDSZ != 64
+# define ALIGN_DOUBLE
+# endif
+# else
+# define ALIGNMENT 4
+# define ALIGN_DOUBLE
+# endif
+# define DYNAMIC_LOADING
+# endif
+# ifdef MSWINCE
+# define OS_TYPE "MSWINCE"
+# define ALIGNMENT 4
+# define DATAEND /* not needed */
+# endif
+# if defined(NETBSD)
+# define ALIGNMENT 4
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+# define USE_GENERIC_PUSH_REGS
+# ifdef __ELF__
+ extern int etext[];
+# define DATASTART GC_data_start
+# define NEED_FIND_LIMIT
+# define DYNAMIC_LOADING
+# else
+# define DATASTART ((ptr_t) 0x10000000)
+# define STACKBOTTOM ((ptr_t) 0x7ffff000)
+#    endif /* __ELF__ */
+# endif
+# endif
+
+# ifdef RS6000
+# define MACH_TYPE "RS6000"
+# ifdef ALIGNMENT
+# undef ALIGNMENT
+# endif
+# ifdef IA64
+# undef IA64 /* DOB: some AIX installs stupidly define IA64 in /usr/include/sys/systemcfg.h */
+# endif
+# ifdef __64BIT__
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# define STACKBOTTOM ((ptr_t)0x1000000000000000)
+# else
+# define ALIGNMENT 4
+# define CPP_WORDSZ 32
+# define STACKBOTTOM ((ptr_t)((ulong)&errno))
+# endif
+# define USE_MMAP
+# define USE_MMAP_ANON
+ /* From AIX linker man page:
+ _text Specifies the first location of the program.
+ _etext Specifies the first location after the program.
+ _data Specifies the first location of the data.
+ _edata Specifies the first location after the initialized data
+ _end or end Specifies the first location after all data.
+ */
+ extern int _data[], _end[];
+# define DATASTART ((ptr_t)((ulong)_data))
+# define DATAEND ((ptr_t)((ulong)_end))
+ extern int errno;
+# define USE_GENERIC_PUSH_REGS
+# define DYNAMIC_LOADING
+ /* For really old versions of AIX, this may have to be removed. */
+# endif
+
+# ifdef HP_PA
+# define MACH_TYPE "HP_PA"
+# ifdef __LP64__
+# define CPP_WORDSZ 64
+# define ALIGNMENT 8
+# else
+# define CPP_WORDSZ 32
+# define ALIGNMENT 4
+# define ALIGN_DOUBLE
+# endif
+# if !defined(GC_HPUX_THREADS) && !defined(GC_LINUX_THREADS)
+# ifndef LINUX /* For now. */
+# define MPROTECT_VDB
+# endif
+# else
+# define GENERIC_COMPARE_AND_SWAP
+ /* No compare-and-swap instruction. Use pthread mutexes */
+ /* when we absolutely have to. */
+# ifdef PARALLEL_MARK
+# define USE_MARK_BYTES
+ /* Minimize compare-and-swap usage. */
+# endif
+# endif
+# define STACK_GROWS_UP
+# ifdef HPUX
+# define OS_TYPE "HPUX"
+ extern int __data_start[];
+# define DATASTART ((ptr_t)(__data_start))
+# if 0
+ /* The following appears to work for 7xx systems running HP/UX */
+	/* 9.xx.  Furthermore, it might result in much faster		*/
+ /* collections than HEURISTIC2, which may involve scanning */
+ /* segments that directly precede the stack. It is not the */
+ /* default, since it may not work on older machine/OS */
+ /* combinations. (Thanks to Raymond X.T. Nijssen for uncovering */
+ /* this.) */
+# define STACKBOTTOM ((ptr_t) 0x7b033000) /* from /etc/conf/h/param.h */
+# else
+ /* Gustavo Rodriguez-Rivera suggested changing HEURISTIC2 */
+ /* to this. Note that the GC must be initialized before the */
+ /* first putenv call. */
+ extern char ** environ;
+# define STACKBOTTOM ((ptr_t)environ)
+# endif
+# define DYNAMIC_LOADING
+# include <unistd.h>
+# define GETPAGESIZE() sysconf(_SC_PAGE_SIZE)
+# ifndef __GNUC__
+# define PREFETCH(x) { \
+ register long addr = (long)(x); \
+ (void) _asm ("LDW", 0, 0, addr, 0); \
+ }
+# endif
+# endif /* HPUX */
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# define DYNAMIC_LOADING
+# define SEARCH_FOR_DATA_START
+ extern int _end[];
+# define DATAEND (&_end)
+# endif /* LINUX */
+# endif /* HP_PA */
+
+# ifdef ALPHA
+# define MACH_TYPE "ALPHA"
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# ifndef LINUX
+# define USE_GENERIC_PUSH_REGS
+ /* Gcc and probably the DEC/Compaq compiler spill pointers to preserved */
+ /* fp registers in some cases when the target is a 21264. The assembly */
+ /* code doesn't handle that yet, and version dependencies make that a */
+ /* bit tricky. Do the easy thing for now. */
+# endif
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+# define DATASTART GC_data_start
+# define ELFCLASS32 32
+# define ELFCLASS64 64
+# define ELF_CLASS ELFCLASS64
+# define DYNAMIC_LOADING
+# endif
+# ifdef OPENBSD
+# define OS_TYPE "OPENBSD"
+# define HEURISTIC2
+# ifdef __ELF__ /* since OpenBSD/Alpha 2.9 */
+# define DATASTART GC_data_start
+# define ELFCLASS32 32
+# define ELFCLASS64 64
+# define ELF_CLASS ELFCLASS64
+# else /* ECOFF, until OpenBSD/Alpha 2.7 */
+# define DATASTART ((ptr_t) 0x140000000)
+# endif
+# endif
+# ifdef FREEBSD
+# define OS_TYPE "FREEBSD"
+/* MPROTECT_VDB is not yet supported at all on FreeBSD/alpha. */
+# define SIG_SUSPEND SIGUSR1
+# define SIG_THR_RESTART SIGUSR2
+# define FREEBSD_STACKBOTTOM
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+/* Handle unmapped hole alpha*-*-freebsd[45]* puts between etext and edata. */
+ extern char etext[];
+ extern char edata[];
+ extern char end[];
+# define NEED_FIND_LIMIT
+# define DATASTART ((ptr_t)(&etext))
+# define DATAEND (GC_find_limit (DATASTART, TRUE))
+# define DATASTART2 ((ptr_t)(&edata))
+# define DATAEND2 ((ptr_t)(&end))
+# endif
+# ifdef OSF1
+# define OS_TYPE "OSF1"
+# define DATASTART ((ptr_t) 0x140000000)
+ extern int _end[];
+# define DATAEND ((ptr_t) &_end)
+ extern char ** environ;
+ /* round up from the value of environ to the nearest page boundary */
+ /* Probably breaks if putenv is called before collector */
+ /* initialization. */
+# define STACKBOTTOM ((ptr_t)(((word)(environ) | (getpagesize()-1))+1))
+/* # define HEURISTIC2 */
+	/* Normally HEURISTIC2 is too conservative, since		*/
+	/* the text segment immediately follows the stack.		*/
+	/* Hence we give an upper bound.				*/
+ /* This is currently unused, since we disabled HEURISTIC2 */
+ extern int __start[];
+# define HEURISTIC2_LIMIT ((ptr_t)((word)(__start) & ~(getpagesize()-1)))
+# ifndef GC_OSF1_THREADS
+ /* Unresolved signal issues with threads. */
+# define MPROTECT_VDB
+# endif
+# define DYNAMIC_LOADING
+# endif
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# ifdef __ELF__
+# define SEARCH_FOR_DATA_START
+# define DYNAMIC_LOADING
+# else
+# define DATASTART ((ptr_t) 0x140000000)
+# endif
+ extern int _end[];
+# define DATAEND (_end)
+# define MPROTECT_VDB
+ /* Has only been superficially tested. May not */
+ /* work on all versions. */
+# endif
+# endif
+
+# ifdef IA64
+# define MACH_TYPE "IA64"
+# define USE_GENERIC_PUSH_REGS
+ /* We need to get preserved registers in addition to register */
+ /* windows. That's easiest to do with setjmp. */
+# ifdef PARALLEL_MARK
+# define USE_MARK_BYTES
+ /* Compare-and-exchange is too expensive to use for */
+ /* setting mark bits. */
+# endif
+# ifdef HPUX
+# ifdef _ILP32
+# define CPP_WORDSZ 32
+# define ALIGN_DOUBLE
+ /* Requires 8 byte alignment for malloc */
+# define ALIGNMENT 4
+# else
+# ifndef _LP64
+ ---> unknown ABI
+# endif
+# define CPP_WORDSZ 64
+# define ALIGN_DOUBLE
+ /* Requires 16 byte alignment for malloc */
+# define ALIGNMENT 8
+# endif
+# define OS_TYPE "HPUX"
+ extern int __data_start[];
+# define DATASTART ((ptr_t)(__data_start))
+ /* Gustavo Rodriguez-Rivera suggested changing HEURISTIC2 */
+ /* to this. Note that the GC must be initialized before the */
+ /* first putenv call. */
+ extern char ** environ;
+# define STACKBOTTOM ((ptr_t)environ)
+# define HPUX_STACKBOTTOM
+# define DYNAMIC_LOADING
+# include <unistd.h>
+# define GETPAGESIZE() sysconf(_SC_PAGE_SIZE)
+ /* The following was empirically determined, and is probably */
+ /* not very robust. */
+ /* Note that the backing store base seems to be at a nice */
+ /* address minus one page. */
+# define BACKING_STORE_DISPLACEMENT 0x1000000
+# define BACKING_STORE_ALIGNMENT 0x1000
+ extern ptr_t GC_register_stackbottom;
+# define BACKING_STORE_BASE GC_register_stackbottom
+ /* Known to be wrong for recent HP/UX versions!!! */
+# endif
+# ifdef LINUX
+# define CPP_WORDSZ 64
+# define ALIGN_DOUBLE
+ /* Requires 16 byte alignment for malloc */
+# define ALIGNMENT 8
+# define OS_TYPE "LINUX"
+ /* The following works on NUE and older kernels: */
+/* # define STACKBOTTOM ((ptr_t) 0xa000000000000000l) */
+ /* This does not work on NUE: */
+# define LINUX_STACKBOTTOM
+ /* We also need the base address of the register stack */
+ /* backing store. This is computed in */
+ /* GC_linux_register_stack_base based on the following */
+ /* constants: */
+# define BACKING_STORE_ALIGNMENT 0x100000
+# define BACKING_STORE_DISPLACEMENT 0x80000000
+ extern ptr_t GC_register_stackbottom;
+# define BACKING_STORE_BASE GC_register_stackbottom
+# define SEARCH_FOR_DATA_START
+# ifdef __GNUC__
+# define DYNAMIC_LOADING
+# else
+ /* In the Intel compiler environment, we seem to end up with */
+ /* statically linked executables and an undefined reference */
+ /* to _DYNAMIC */
+# endif
+# define MPROTECT_VDB
+ /* Requires Linux 2.3.47 or later. */
+ extern int _end[];
+# define DATAEND (_end)
+# ifdef __GNUC__
+# ifndef __INTEL_COMPILER
+# define PREFETCH(x) \
+ __asm__ (" lfetch [%0]": : "r"(x))
+# define PREFETCH_FOR_WRITE(x) \
+ __asm__ (" lfetch.excl [%0]": : "r"(x))
+# define CLEAR_DOUBLE(x) \
+ __asm__ (" stf.spill [%0]=f0": : "r"((void *)(x)))
+# else
+# include <ia64intrin.h>
+# define PREFETCH(x) \
+ __lfetch(__lfhint_none, (x))
+# define PREFETCH_FOR_WRITE(x) \
+ __lfetch(__lfhint_nta, (x))
+# define CLEAR_DOUBLE(x) \
+ __stf_spill((void *)(x), 0)
+#	endif /* __INTEL_COMPILER */
+# endif
+# endif
+# ifdef MSWIN32
+ /* FIXME: This is a very partial guess. There is no port, yet. */
+# define OS_TYPE "MSWIN32"
+ /* STACKBOTTOM and DATASTART are handled specially in */
+ /* os_dep.c. */
+# define DATAEND /* not needed */
+# if defined(_WIN64)
+# define CPP_WORDSZ 64
+# else
+# define CPP_WORDSZ 32 /* Is this possible? */
+# endif
+# define ALIGNMENT 8
+# endif
+# endif
+
+# ifdef M88K
+# define MACH_TYPE "M88K"
+# define ALIGNMENT 4
+# define ALIGN_DOUBLE
+ extern int etext[];
+# ifdef CX_UX
+# define OS_TYPE "CX_UX"
+# define DATASTART ((((word)etext + 0x3fffff) & ~0x3fffff) + 0x10000)
+# endif
+# ifdef DGUX
+# define OS_TYPE "DGUX"
+ extern ptr_t GC_SysVGetDataStart();
+# define DATASTART GC_SysVGetDataStart(0x10000, etext)
+# endif
+# define STACKBOTTOM ((char*)0xf0000000) /* determined empirically */
+# endif
+
+# ifdef S370
+ /* If this still works, and if anyone cares, this should probably */
+ /* be moved to the S390 category. */
+# define MACH_TYPE "S370"
+# define ALIGNMENT 4 /* Required by hardware */
+# define USE_GENERIC_PUSH_REGS
+# ifdef UTS4
+# define OS_TYPE "UTS4"
+ extern int etext[];
+ extern int _etext[];
+ extern int _end[];
+ extern ptr_t GC_SysVGetDataStart();
+# define DATASTART GC_SysVGetDataStart(0x10000, _etext)
+# define DATAEND (_end)
+# define HEURISTIC2
+# endif
+# endif
+
+# ifdef S390
+# define MACH_TYPE "S390"
+# define USE_GENERIC_PUSH_REGS
+# ifndef __s390x__
+# define ALIGNMENT 4
+# define CPP_WORDSZ 32
+# else
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# endif
+# ifndef HBLKSIZE
+# define HBLKSIZE 4096
+# endif
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# define DYNAMIC_LOADING
+ extern int __data_start[];
+# define DATASTART ((ptr_t)(__data_start))
+ extern int _end[];
+# define DATAEND (_end)
+# define CACHE_LINE_SIZE 256
+# define GETPAGESIZE() 4096
+# endif
+# endif
+
+# if defined(PJ)
+# define ALIGNMENT 4
+ extern int _etext[];
+# define DATASTART ((ptr_t)(_etext))
+# define HEURISTIC1
+# endif
+
+# ifdef ARM32
+# define CPP_WORDSZ 32
+# define MACH_TYPE "ARM32"
+# define ALIGNMENT 4
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+# ifdef __ELF__
+# define DATASTART GC_data_start
+# define DYNAMIC_LOADING
+# else
+ extern char etext[];
+# define DATASTART ((ptr_t)(etext))
+# endif
+# define USE_GENERIC_PUSH_REGS
+# endif
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# undef STACK_GRAN
+# define STACK_GRAN 0x10000000
+# define USE_GENERIC_PUSH_REGS
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# include <features.h>
+# if defined(__GLIBC__) && __GLIBC__ >= 2
+# define SEARCH_FOR_DATA_START
+# else
+ extern char **__environ;
+# define DATASTART ((ptr_t)(&__environ))
+ /* hideous kludge: __environ is the first */
+ /* word in crt0.o, and delimits the start */
+ /* of the data segment, no matter which */
+ /* ld options were passed through. */
+ /* We could use _etext instead, but that */
+ /* would include .rodata, which may */
+ /* contain large read-only data tables */
+ /* that we'd rather not scan. */
+# endif
+ extern int _end[];
+# define DATAEND (_end)
+# else
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
+# endif
+# endif
+# ifdef MSWINCE
+# define OS_TYPE "MSWINCE"
+# define DATAEND /* not needed */
+# endif
+# ifdef NOSYS
+ /* __data_start is usually defined in the target linker script. */
+ extern int __data_start[];
+# define DATASTART (ptr_t)(__data_start)
+# define USE_GENERIC_PUSH_REGS
+ /* __stack_base__ is set in newlib/libc/sys/arm/crt0.S */
+ extern void *__stack_base__;
+# define STACKBOTTOM ((ptr_t) (__stack_base__))
+# endif
+# endif
+
+# ifdef CRIS
+# define MACH_TYPE "CRIS"
+# define CPP_WORDSZ 32
+# define ALIGNMENT 1
+# define OS_TYPE "LINUX"
+# define DYNAMIC_LOADING
+# define LINUX_STACKBOTTOM
+# define USE_GENERIC_PUSH_REGS
+# define SEARCH_FOR_DATA_START
+ extern int _end[];
+# define DATAEND (_end)
+# endif
+
+# ifdef SH
+# define MACH_TYPE "SH"
+# define ALIGNMENT 4
+# ifdef MSWINCE
+# define OS_TYPE "MSWINCE"
+# define DATAEND /* not needed */
+# endif
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# define USE_GENERIC_PUSH_REGS
+# define DYNAMIC_LOADING
+# define SEARCH_FOR_DATA_START
+ extern int _end[];
+# define DATAEND (_end)
+# endif
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# define HEURISTIC2
+# define DATASTART GC_data_start
+# define USE_GENERIC_PUSH_REGS
+# define DYNAMIC_LOADING
+# endif
+# endif
+
+# ifdef SH4
+# define MACH_TYPE "SH4"
+# define OS_TYPE "MSWINCE"
+# define ALIGNMENT 4
+# define DATAEND /* not needed */
+# endif
+
+# ifdef M32R
+# define CPP_WORDSZ 32
+# define MACH_TYPE "M32R"
+# define ALIGNMENT 4
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# undef STACK_GRAN
+# define STACK_GRAN 0x10000000
+# define USE_GENERIC_PUSH_REGS
+# define DYNAMIC_LOADING
+# define SEARCH_FOR_DATA_START
+ extern int _end[];
+# define DATAEND (_end)
+# endif
+# endif
+
+# ifdef X86_64
+# define MACH_TYPE "X86_64"
+# define ALIGNMENT 8
+# define CPP_WORDSZ 64
+# ifndef HBLKSIZE
+# define HBLKSIZE 4096
+# endif
+# define CACHE_LINE_SIZE 64
+# define USE_GENERIC_PUSH_REGS
+# ifdef LINUX
+# define OS_TYPE "LINUX"
+# define LINUX_STACKBOTTOM
+# if !defined(GC_LINUX_THREADS) || !defined(REDIRECT_MALLOC)
+# define MPROTECT_VDB
+# else
+ /* We seem to get random errors in incremental mode, */
+ /* possibly because Linux threads is itself a malloc client */
+ /* and can't deal with the signals. */
+# endif
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# ifdef UNDEFINED /* includes ro data */
+ extern int _etext[];
+# define DATASTART ((ptr_t)((((word) (_etext)) + 0xfff) & ~0xfff))
+# endif
+# include <features.h>
+# define SEARCH_FOR_DATA_START
+ extern int _end[];
+# define DATAEND (_end)
+# else
+ extern int etext[];
+# define DATASTART ((ptr_t)((((word) (etext)) + 0xfff) & ~0xfff))
+# endif
+#       if defined(__GNUC__) && __GNUC__ >= 3
+# define PREFETCH(x) __builtin_prefetch((x), 0, 0)
+# define PREFETCH_FOR_WRITE(x) __builtin_prefetch((x), 1)
+# endif
+# endif
+# ifdef DARWIN
+# define OS_TYPE "DARWIN"
+# define DARWIN_DONT_PARSE_STACK
+# define DYNAMIC_LOADING
+      /* XXX: see get_end(3); get_etext() and get_end() should not be used.
+         They aren't used when dyld support is enabled (it is by default). */
+# define DATASTART ((ptr_t) get_etext())
+# define DATAEND ((ptr_t) get_end())
+# define STACKBOTTOM ((ptr_t) 0x7fff5fc00000)
+# define USE_MMAP
+# define USE_MMAP_ANON
+# ifdef GC_DARWIN_THREADS
+# define MPROTECT_VDB
+# endif
+# include <unistd.h>
+# define GETPAGESIZE() getpagesize()
+      /* There seem to be some issues with trylock hanging on Darwin.
+         This should be looked into some more. */
+# define NO_PTHREAD_TRYLOCK
+# endif
+# ifdef FREEBSD
+# define OS_TYPE "FREEBSD"
+# ifndef GC_FREEBSD_THREADS
+# define MPROTECT_VDB
+# endif
+# ifdef __GLIBC__
+# define SIG_SUSPEND (32+6)
+# define SIG_THR_RESTART (32+5)
+ extern int _end[];
+# define DATAEND (_end)
+# else
+# define SIG_SUSPEND SIGUSR1
+# define SIG_THR_RESTART SIGUSR2
+# endif
+# define FREEBSD_STACKBOTTOM
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+ extern char etext[];
+ extern char * GC_FreeBSDGetDataStart();
+# define DATASTART GC_FreeBSDGetDataStart(0x1000, &etext)
+# endif
+# ifdef NETBSD
+# define OS_TYPE "NETBSD"
+# ifdef __ELF__
+# define DYNAMIC_LOADING
+# endif
+# define HEURISTIC2
+ extern char etext[];
+# define SEARCH_FOR_DATA_START
+# endif
+# endif
+
+#if defined(LINUX) && defined(USE_MMAP)
+ /* The kernel may do a somewhat better job merging mappings etc. */
+ /* with anonymous mappings. */
+# define USE_MMAP_ANON
+#endif
+
+#if defined(LINUX) && defined(REDIRECT_MALLOC)
+ /* Rld appears to allocate some memory with its own allocator, and */
+ /* some through malloc, which might be redirected. To make this */
+ /* work with collectable memory, we have to scan memory allocated */
+ /* by rld's internal malloc. */
+# define USE_PROC_FOR_LIBRARIES
+#endif
+
+# ifndef STACK_GROWS_UP
+# define STACK_GROWS_DOWN
+# endif
+
+# ifndef CPP_WORDSZ
+# define CPP_WORDSZ 32
+# endif
+
+# ifndef OS_TYPE
+# define OS_TYPE ""
+# endif
+
+# ifndef DATAEND
+ extern int end[];
+# define DATAEND (end)
+# endif
+
+# if defined(SVR4) && !defined(GETPAGESIZE)
+# include <unistd.h>
+# define GETPAGESIZE() sysconf(_SC_PAGESIZE)
+# endif
+
+# ifndef GETPAGESIZE
+# if defined(SUNOS5) || defined(IRIX5)
+# include <unistd.h>
+# endif
+# define GETPAGESIZE() getpagesize()
+# endif
+
+# if defined(SUNOS5) || defined(DRSNX) || defined(UTS4)
+ /* OS has SVR4 generic features. Probably others also qualify. */
+# define SVR4
+# endif
+
+# if defined(SUNOS5) || defined(DRSNX)
+ /* OS has SUNOS5 style semi-undocumented interface to dynamic */
+ /* loader. */
+# define SUNOS5DL
+ /* OS has SUNOS5 style signal handlers. */
+# define SUNOS5SIGS
+# endif
+
+# if defined(HPUX)
+# define SUNOS5SIGS
+# endif
+
+# if defined(FREEBSD) && ((__FreeBSD__ >= 4) || (__FreeBSD_kernel__ >= 4))
+# define SUNOS5SIGS
+# endif
+
+# if defined(SVR4) || defined(LINUX) || defined(IRIX5) || defined(HPUX) \
+ || defined(OPENBSD) || defined(NETBSD) || defined(FREEBSD) \
+ || defined(DGUX) || defined(BSD) || defined(SUNOS4) \
+ || defined(_AIX) || defined(DARWIN) || defined(OSF1)
+# define UNIX_LIKE /* Basic Unix-like system calls work. */
+# endif
+
+# if CPP_WORDSZ != 32 && CPP_WORDSZ != 64
+   --> bad word size
+# endif
+
+# ifdef PCR
+# undef DYNAMIC_LOADING
+# undef STACKBOTTOM
+# undef HEURISTIC1
+# undef HEURISTIC2
+# undef PROC_VDB
+# undef MPROTECT_VDB
+# define PCR_VDB
+# endif
+
+# ifdef SRC_M3
+ /* Postponed for now. */
+# undef PROC_VDB
+# undef MPROTECT_VDB
+# endif
+
+# ifdef SMALL_CONFIG
+ /* Presumably not worth the space it takes. */
+# undef PROC_VDB
+# undef MPROTECT_VDB
+# endif
+
+# ifdef USE_MUNMAP
+# undef MPROTECT_VDB /* Can't deal with address space holes. */
+# endif
+
+# ifdef PARALLEL_MARK
+# undef MPROTECT_VDB /* For now. */
+# endif
+
+# if !defined(PCR_VDB) && !defined(PROC_VDB) && !defined(MPROTECT_VDB)
+# define DEFAULT_VDB
+# endif
+
+# ifndef PREFETCH
+# define PREFETCH(x)
+# define NO_PREFETCH
+# endif
+
+# ifndef PREFETCH_FOR_WRITE
+# define PREFETCH_FOR_WRITE(x)
+# define NO_PREFETCH_FOR_WRITE
+# endif
+
+# ifndef CACHE_LINE_SIZE
+# define CACHE_LINE_SIZE 32 /* Wild guess */
+# endif
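For context, a rough sketch (not part of this patch) of how a scanning loop would use these hooks. On the gcc/X86_64 configuration above, PREFETCH expands to __builtin_prefetch; where no definition exists, it is a no-op. The function and loop shape here are hypothetical:

    /* Hypothetical scan loop: prefetch a few cache lines ahead of the */
    /* words being examined, to hide memory latency during marking.    */
    static void scan_range_sketch(char *bottom, char *top)
    {
        char *p;
        for (p = bottom; p < top; p += CACHE_LINE_SIZE) {
            PREFETCH(p + 4 * CACHE_LINE_SIZE);
            /* ... examine the candidate pointers in this cache line ... */
        }
    }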
+
+# if defined(LINUX) || defined(__GLIBC__)
+# define REGISTER_LIBRARIES_EARLY
+ /* We sometimes use dl_iterate_phdr, which may acquire an internal */
+ /* lock. This isn't safe after the world has stopped. So we must */
+ /* call GC_register_dynamic_libraries before stopping the world. */
+ /* For performance reasons, this may be beneficial on other */
+ /* platforms as well, though it should be avoided in win32. */
+# endif /* LINUX */
+
+# if defined(SEARCH_FOR_DATA_START)
+ extern ptr_t GC_data_start;
+# define DATASTART GC_data_start
+# endif
+
+# ifndef CLEAR_DOUBLE
+# define CLEAR_DOUBLE(x) \
+ ((word*)x)[0] = 0; \
+ ((word*)x)[1] = 0;
+# endif /* CLEAR_DOUBLE */
+
+ /* Internally we use GC_SOLARIS_THREADS to test for either old-style Solaris threads or pthreads. */
+# if defined(GC_SOLARIS_PTHREADS) && !defined(GC_SOLARIS_THREADS)
+# define GC_SOLARIS_THREADS
+# endif
+
+# if defined(GC_IRIX_THREADS) && !defined(IRIX5)
+ --> inconsistent configuration
+# endif
+# if defined(GC_LINUX_THREADS) && !defined(LINUX)
+ --> inconsistent configuration
+# endif
+# if defined(GC_SOLARIS_THREADS) && !defined(SUNOS5)
+ --> inconsistent configuration
+# endif
+# if defined(GC_HPUX_THREADS) && !defined(HPUX)
+ --> inconsistent configuration
+# endif
+# if defined(GC_AIX_THREADS) && !defined(_AIX)
+ --> inconsistent configuration
+# endif
+# if defined(GC_WIN32_THREADS) && !defined(MSWIN32) && !defined(CYGWIN32)
+ --> inconsistent configuration
+# endif
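The "--> inconsistent configuration" lines above are deliberate, not a typo: "-->" is not valid C, so if one of the guards fires, compilation stops and the offending line, message included, shows up in the compiler's error output. A minimal sketch of the idiom, with hypothetical macro names:

    #if defined(SOME_FEATURE) && !defined(ITS_PREREQUISITE)
      --> SOME_FEATURE requires ITS_PREREQUISITE
    #endif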
+
+# if defined(PCR) || defined(SRC_M3) || \
+ defined(GC_SOLARIS_THREADS) || defined(GC_WIN32_THREADS) || \
+ defined(GC_PTHREADS)
+# define THREADS
+# endif
+
+# if defined(HP_PA) || defined(M88K) \
+ || defined(POWERPC) && !defined(DARWIN) \
+ || defined(LINT) || defined(MSWINCE) || defined(ARM32) || defined(CRIS) \
+ || (defined(I386) && defined(__LCC__))
+ /* Use setjmp based hack to mark from callee-save registers. */
+ /* The define should move to the individual platform */
+ /* descriptions. */
+# define USE_GENERIC_PUSH_REGS
+# endif
+
+# if defined(MSWINCE)
+# define NO_GETENV
+# endif
+
+# if defined(SPARC)
+# define ASM_CLEAR_CODE /* Stack clearing is crucial, and we */
+ /* include assembly code to do it well. */
+# endif
+
+ /* Can we save the call chain in objects for debugging? */
+ /* Set NFRAMES (# of saved frames) and NARGS (# of args for each */
+ /* frame) to reasonable values for the platform. */
+ /* Set SAVE_CALL_CHAIN if we can. SAVE_CALL_COUNT can be specified */
+ /* at build time, though we feel free to adjust it slightly. */
+ /* Define NEED_CALLINFO if we either save the call stack or */
+ /* GC_ADD_CALLER is defined. */
+ /* GC_CAN_SAVE_CALL_STACKS is set in gc.h. */
+
+#if defined(SPARC)
+# define CAN_SAVE_CALL_ARGS
+#endif
+#if (defined(I386) || defined(X86_64)) && (defined(LINUX) || defined(__GLIBC__))
+ /* SAVE_CALL_CHAIN is supported if the code is compiled to save */
+ /* frame pointers by default, i.e. no -fomit-frame-pointer flag. */
+# define CAN_SAVE_CALL_ARGS
+#endif
+
+# if defined(SAVE_CALL_COUNT) && !defined(GC_ADD_CALLER) \
+ && defined(GC_CAN_SAVE_CALL_STACKS)
+# define SAVE_CALL_CHAIN
+# endif
+# ifdef SAVE_CALL_CHAIN
+# if defined(SAVE_CALL_NARGS) && defined(CAN_SAVE_CALL_ARGS)
+# define NARGS SAVE_CALL_NARGS
+# else
+# define NARGS 0 /* Number of arguments to save for each call. */
+# endif
+# endif
+# ifdef SAVE_CALL_CHAIN
+# ifndef SAVE_CALL_COUNT
+# define NFRAMES 6 /* Number of frames to save. Even, for */
+ /* alignment reasons. */
+# else
+# define NFRAMES ((SAVE_CALL_COUNT + 1) & ~1)
+# endif
+# define NEED_CALLINFO
+# endif /* SAVE_CALL_CHAIN */
+# ifdef GC_ADD_CALLER
+# define NFRAMES 1
+# define NARGS 0
+# define NEED_CALLINFO
+# endif
+
+# if defined(MAKE_BACK_GRAPH) && !defined(DBG_HDRS_ALL)
+# define DBG_HDRS_ALL
+# endif
+
+# if defined(POINTER_MASK) && !defined(POINTER_SHIFT)
+# define POINTER_SHIFT 0
+# endif
+
+# if defined(POINTER_SHIFT) && !defined(POINTER_MASK)
+# define POINTER_MASK ((GC_word)(-1))
+# endif
+
+# if !defined(FIXUP_POINTER) && defined(POINTER_MASK)
+# define FIXUP_POINTER(p) ((p) = (((p) & (POINTER_MASK)) << POINTER_SHIFT))
+# endif
+
+# if defined(FIXUP_POINTER)
+# define NEED_FIXUP_POINTER 1
+# else
+# define NEED_FIXUP_POINTER 0
+# define FIXUP_POINTER(p)
+# endif
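A minimal sketch of what the POINTER_MASK/POINTER_SHIFT hooks accomplish, using hypothetical values (neither macro is defined on common configurations, and GC_word below stands in for the collector's word type):

    #include <stdio.h>

    typedef unsigned long GC_word;                        /* stand-in     */
    #define POINTER_MASK ((GC_word)0x00ffffffffffffffUL)  /* hypothetical */
    #define POINTER_SHIFT 0                               /* hypothetical */
    #define FIXUP_POINTER(p) ((p) = (((p) & (POINTER_MASK)) << POINTER_SHIFT))

    int main(void)
    {
        GC_word p = 0xab00000012345678UL;  /* pointer carrying a tag byte */
        FIXUP_POINTER(p);                  /* the mask strips the tag     */
        printf("%#lx\n", p);               /* prints 0x12345678           */
        return 0;
    }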
+
+#ifdef GC_PRIVATE_H
+  /* This relies on some type definitions from gc_priv.h, from */
+  /* which it is normally included. */
+ /* */
+ /* How to get heap memory from the OS: */
+ /* Note that sbrk()-like allocation is preferred, since it */
+ /* usually makes it possible to merge consecutively allocated */
+  /* chunks.  It also avoids unintended recursion with */
+  /* -DREDIRECT_MALLOC. */
+  /* GET_MEM() returns an HBLKSIZE-aligned chunk. */
+  /* 0 is taken to mean failure. */
+  /* In the case of USE_MMAP, the argument must also be a */
+  /* physical page size. */
+  /* GET_MEM is currently not assumed to retrieve 0-filled space, */
+  /* though we should perhaps take advantage of the case in which */
+  /* it does. */
+ struct hblk; /* See gc_priv.h. */
+# ifdef PCR
+ char * real_malloc();
+# define GET_MEM(bytes) HBLKPTR(real_malloc((size_t)bytes + GC_page_size) \
+ + GC_page_size-1)
+# else
+# ifdef OS2
+ void * os2_alloc(size_t bytes);
+# define GET_MEM(bytes) HBLKPTR((ptr_t)os2_alloc((size_t)bytes \
+ + GC_page_size) \
+ + GC_page_size-1)
+# else
+# if defined(NEXT) || defined(DOS4GW) || \
+ (defined(AMIGA) && !defined(GC_AMIGA_FASTALLOC)) || \
+ (defined(SUNOS5) && !defined(USE_MMAP))
+# define GET_MEM(bytes) HBLKPTR((size_t) \
+ calloc(1, (size_t)bytes + GC_page_size) \
+ + GC_page_size-1)
+# else
+# ifdef MSWIN32
+ extern ptr_t GC_win32_get_mem();
+# define GET_MEM(bytes) (struct hblk *)GC_win32_get_mem(bytes)
+# else
+# ifdef MACOS
+# if defined(USE_TEMPORARY_MEMORY)
+ extern Ptr GC_MacTemporaryNewPtr(size_t size,
+ Boolean clearMemory);
+# define GET_MEM(bytes) HBLKPTR( \
+ GC_MacTemporaryNewPtr(bytes + GC_page_size, true) \
+ + GC_page_size-1)
+# else
+# define GET_MEM(bytes) HBLKPTR( \
+ NewPtrClear(bytes + GC_page_size) + GC_page_size-1)
+# endif
+# else
+# ifdef MSWINCE
+ extern ptr_t GC_wince_get_mem();
+# define GET_MEM(bytes) (struct hblk *)GC_wince_get_mem(bytes)
+# else
+# if defined(AMIGA) && defined(GC_AMIGA_FASTALLOC)
+ extern void *GC_amiga_get_mem(size_t size);
+# define GET_MEM(bytes) HBLKPTR((size_t) \
+ GC_amiga_get_mem((size_t)bytes + GC_page_size) \
+ + GC_page_size-1)
+# else
+ extern ptr_t GC_unix_get_mem();
+# define GET_MEM(bytes) (struct hblk *)GC_unix_get_mem(bytes)
+# endif
+# endif
+# endif
+# endif
+# endif
+# endif
+# endif
+
+#endif /* GC_PRIVATE_H */
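All of the GET_MEM variants above share one idiom: over-allocate by a page, then round the raw pointer to an HBLKSIZE boundary. A sketch under stated assumptions (HBLKSIZE uses a typical value, HBLKPTR is a stand-in for the gc_priv.h macro that rounds down to a block boundary, and the calloc-based variant is shown):

    #include <stdlib.h>

    #define HBLKSIZE 4096                    /* typical value */
    #define HBLKPTR(p) ((void *)((size_t)(p) & ~(size_t)(HBLKSIZE - 1)))

    /* Adding page_size - 1 and rounding down yields a block-aligned */
    /* chunk of at least 'bytes' bytes inside the over-allocation.   */
    static void *get_mem_sketch(size_t bytes, size_t page_size)
    {
        char *raw = (char *)calloc(1, bytes + page_size);
        if (raw == 0) return 0;              /* 0 means failure, as above */
        return HBLKPTR(raw + page_size - 1);
    }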
+
+#if defined(_AIX) && !defined(__GNUC__) && !defined(__STDC__)
+    /* IBM's xlc compiler doesn't appear to follow the convention of */
+ /* defining __STDC__ to be zero in extended mode. */
+# define __STDC__ 0
+#endif
+
+# endif /* GCCONFIG_H */
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_stop_world.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_stop_world.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_stop_world.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_stop_world.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,11 @@
+#ifndef GC_PTHREAD_STOP_WORLD_H
+#define GC_PTHREAD_STOP_WORLD_H
+
+struct thread_stop_info {
+ word last_stop_count; /* GC_last_stop_count value when thread */
+ /* last successfully handled a suspend */
+ /* signal. */
+ ptr_t stack_ptr; /* Valid only when stopped. */
+};
+
+#endif
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_support.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_support.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_support.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/pthread_support.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,103 @@
+#ifndef GC_PTHREAD_SUPPORT_H
+#define GC_PTHREAD_SUPPORT_H
+
+# include "private/gc_priv.h"
+
+# if defined(GC_PTHREADS) && !defined(GC_SOLARIS_THREADS) \
+ && !defined(GC_WIN32_THREADS)
+
+#if defined(GC_DARWIN_THREADS)
+# include "private/darwin_stop_world.h"
+#else
+# include "private/pthread_stop_world.h"
+#endif
+
+/* We use the allocation lock to protect thread-related data structures. */
+
+/* The set of all known threads. We intercept thread creation and */
+/* joins. */
+/* Protected by allocation/GC lock. */
+/* Some of this should be declared volatile, but that's inconsistent */
+/* with some library routine declarations. */
+typedef struct GC_Thread_Rep {
+ struct GC_Thread_Rep * next; /* More recently allocated threads */
+ /* with a given pthread id come */
+ /* first. (All but the first are */
+ /* guaranteed to be dead, but we may */
+ /* not yet have registered the join.) */
+ pthread_t id;
+ /* Extra bookkeeping information the stopping code uses */
+ struct thread_stop_info stop_info;
+
+ short flags;
+# define FINISHED 1 /* Thread has exited. */
+# define DETACHED 2 /* Thread is intended to be detached. */
+# define MAIN_THREAD 4 /* True for the original thread only. */
+# define SUSPENDED 8 /* True if thread was suspended externally */
+ short thread_blocked; /* Protected by GC lock. */
+ /* Treated as a boolean value. If set, */
+ /* thread will acquire GC lock before */
+ /* doing any pointer manipulations, and */
+ /* has set its sp value. Thus it does */
+ /* not need to be sent a signal to stop */
+ /* it. */
+ ptr_t stack_end; /* Cold end of the stack. */
+# ifdef IA64
+ ptr_t backing_store_end;
+ ptr_t backing_store_ptr;
+# endif
+ void * status; /* The value returned from the thread. */
+ /* Used only to avoid premature */
+ /* reclamation of any data it might */
+ /* reference. */
+# ifdef THREAD_LOCAL_ALLOC
+# if CPP_WORDSZ == 64 && defined(ALIGN_DOUBLE)
+# define GRANULARITY 16
+# define NFREELISTS 49
+# else
+# define GRANULARITY 8
+# define NFREELISTS 65
+# endif
+ /* The ith free list corresponds to size i*GRANULARITY */
+# define INDEX_FROM_BYTES(n) ((ADD_SLOP(n) + GRANULARITY - 1)/GRANULARITY)
+# define BYTES_FROM_INDEX(i) ((i) * GRANULARITY - EXTRA_BYTES)
+# define SMALL_ENOUGH(bytes) (ADD_SLOP(bytes) <= \
+ (NFREELISTS-1)*GRANULARITY)
+ ptr_t ptrfree_freelists[NFREELISTS];
+ ptr_t normal_freelists[NFREELISTS];
+# ifdef GC_GCJ_SUPPORT
+ ptr_t gcj_freelists[NFREELISTS];
+# endif
+ /* Free lists contain either a pointer or a small count */
+ /* reflecting the number of granules allocated at that */
+ /* size. */
+ /* 0 ==> thread-local allocation in use, free list */
+ /* empty. */
+ /* > 0, <= DIRECT_GRANULES ==> Using global allocation, */
+ /* too few objects of this size have been */
+ /* allocated by this thread. */
+ /* >= HBLKSIZE ==> pointer to nonempty free list. */
+ /* > DIRECT_GRANULES, < HBLKSIZE ==> transition to */
+ /* local alloc, equivalent to 0. */
+# define DIRECT_GRANULES (HBLKSIZE/GRANULARITY)
+ /* Don't use local free lists for up to this much */
+ /* allocation. */
+# endif
+} * GC_thread;
+
+# define THREAD_TABLE_SZ 128 /* Must be power of 2 */
+extern volatile GC_thread GC_threads[THREAD_TABLE_SZ];
+
+extern GC_bool GC_thr_initialized;
+
+GC_thread GC_lookup_thread(pthread_t id);
+
+void GC_stop_init();
+
+extern GC_bool GC_in_thread_creation;
+ /* We may currently be in thread creation or destruction. */
+ /* Only set to TRUE while allocation lock is held. */
+ /* When set, it is OK to run GC from unknown thread. */
+
+#endif /* GC_PTHREADS && !GC_SOLARIS_THREADS.... etc */
+#endif /* GC_PTHREAD_SUPPORT_H */
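A quick worked check of the free-list sizing macros above, treating ADD_SLOP as the identity and EXTRA_BYTES as zero for illustration (both really come from gc_priv.h):

    #define GRANULARITY 8            /* the non-64-bit case above */
    #define ADD_SLOP(n) (n)          /* stand-in */
    #define EXTRA_BYTES 0            /* stand-in */
    #define INDEX_FROM_BYTES(n) ((ADD_SLOP(n) + GRANULARITY - 1)/GRANULARITY)
    #define BYTES_FROM_INDEX(i) ((i) * GRANULARITY - EXTRA_BYTES)

    /* A 20-byte request maps to free list INDEX_FROM_BYTES(20) == 3, */
    /* which serves objects of BYTES_FROM_INDEX(3) == 24 bytes.       */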
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/solaris_threads.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/solaris_threads.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/solaris_threads.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/solaris_threads.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,37 @@
+#ifdef GC_SOLARIS_THREADS
+
+/* The set of all known threads. We intercept thread creation and */
+/* joins. We never actually create detached threads. We allocate all */
+/* new thread stacks ourselves. These allow us to maintain this */
+/* data structure. */
+/* Protected by GC_thr_lock. */
+/* Some of this should be declared volatile, but that's inconsistent */
+/* with some library routine declarations. In particular, the */
+/* definition of cond_t doesn't mention volatile! */
+ typedef struct GC_Thread_Rep {
+ struct GC_Thread_Rep * next;
+ thread_t id;
+ word flags;
+# define FINISHED 1 /* Thread has exited. */
+# define DETACHED 2 /* Thread is intended to be detached. */
+# define CLIENT_OWNS_STACK 4
+ /* Stack was supplied by client. */
+# define SUSPNDED 8 /* Currently suspended. */
+ /* SUSPENDED is used in a system header. */
+ ptr_t stack;
+ size_t stack_size;
+ cond_t join_cv;
+ void * status;
+ } * GC_thread;
+ extern GC_thread GC_new_thread(thread_t id);
+
+ extern GC_bool GC_thr_initialized;
+ extern volatile GC_thread GC_threads[];
+ extern size_t GC_min_stack_sz;
+ extern size_t GC_page_sz;
+ extern void GC_thr_init(void);
+ extern ptr_t GC_stack_alloc(size_t * stack_size);
+ extern void GC_stack_free(ptr_t stack, size_t size);
+
+# endif /* GC_SOLARIS_THREADS */
+
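A sketch of how the Solaris support code would pair the stack calls above (illustrative only; per the declarations, GC_stack_alloc may round the request up and reports the actual size through its argument):

    static void stack_lifecycle_sketch(void)
    {
        size_t sz = GC_min_stack_sz;
        ptr_t stk = GC_stack_alloc(&sz);     /* sz may be rounded up */
        /* ... run a thread on the stack [stk, stk + sz) ... */
        GC_stack_free(stk, sz);              /* pass back the actual size */
    }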
Added: llvm-gcc-4.2/trunk/boehm-gc/include/private/specific.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/private/specific.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/private/specific.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/private/specific.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,95 @@
+/*
+ * This is a reimplementation of a subset of the pthread_getspecific/setspecific
+ * interface. This appears to outperform the standard linuxthreads one
+ * by a significant margin.
+ * The major restriction is that each thread may only make a single
+ * pthread_setspecific call on a single key. (The current data structure
+ * doesn't really require that. The restriction should be easily removable.)
+ * We don't currently support the destruction functions, though that
+ * could be done.
+ * We also currently assume that only one pthread_setspecific call
+ * can be executed at a time, though that assumption would be easy to remove
+ * by adding a lock.
+ */
+
+#include <errno.h>
+
+/* Called during key creation or setspecific. */
+/* For the GC we already hold the lock. */
+/* Currently allocated objects leak on thread exit. */
+/* That's hard to fix, but OK if we allocate garbage */
+/* collected memory. */
+#define MALLOC_CLEAR(n) GC_INTERNAL_MALLOC(n, NORMAL)
+#define PREFIXED(name) GC_##name
+
+#define TS_CACHE_SIZE 1024
+#define CACHE_HASH(n) (((((long)n) >> 8) ^ (long)n) & (TS_CACHE_SIZE - 1))
+#define TS_HASH_SIZE 1024
+#define HASH(n) (((((long)n) >> 8) ^ (long)n) & (TS_HASH_SIZE - 1))
+
+/* An entry describing a thread-specific value for a given thread. */
+/* All such accessible structures preserve the invariant that if either */
+/* thread is a valid pthread id or qtid is a valid "quick thread id" */
+/* for a thread, then value holds the corresponding thread specific */
+/* value. This invariant must be preserved at ALL times, since */
+/* asynchronous reads are allowed. */
+typedef struct thread_specific_entry {
+ unsigned long qtid; /* quick thread id, only for cache */
+ void * value;
+ struct thread_specific_entry *next;
+ pthread_t thread;
+} tse;
+
+
+/* We represent each thread-specific datum as two tables. The first is */
+/* a cache, indexed by a "quick thread identifier". The "quick" thread */
+/* identifier is an easy to compute value, which is guaranteed to */
+/* determine the thread, though a thread may correspond to more than */
+/* one value. We typically use the address of a page in the stack. */
+/* The second is a hash table, indexed by pthread_self(). It is used */
+/* only as a backup. */
+
+/* Return the "quick thread id". Default version. Assumes page size, */
+/* or at least thread stack separation, is at least 4K. */
+/* Must be defined so that it never returns 0. (Page 0 can't really */
+/* be part of any stack, since that would make 0 a valid stack pointer.)*/
+static __inline__ unsigned long quick_thread_id() {
+ int dummy;
+ return (unsigned long)(&dummy) >> 12;
+}
+
+#define INVALID_QTID ((unsigned long)0)
+#define INVALID_THREADID ((pthread_t)0)
+
+typedef struct thread_specific_data {
+ tse * volatile cache[TS_CACHE_SIZE];
+ /* A faster index to the hash table */
+ tse * hash[TS_HASH_SIZE];
+ pthread_mutex_t lock;
+} tsd;
+
+typedef tsd * PREFIXED(key_t);
+
+extern int PREFIXED(key_create) (tsd ** key_ptr, void (* destructor)(void *));
+
+extern int PREFIXED(setspecific) (tsd * key, void * value);
+
+extern void PREFIXED(remove_specific) (tsd * key);
+
+/* An internal version of getspecific that assumes a cache miss. */
+void * PREFIXED(slow_getspecific) (tsd * key, unsigned long qtid,
+ tse * volatile * cache_entry);
+
+static __inline__ void * PREFIXED(getspecific) (tsd * key) {
+ long qtid = quick_thread_id();
+ unsigned hash_val = CACHE_HASH(qtid);
+ tse * volatile * entry_ptr = key -> cache + hash_val;
+ tse * entry = *entry_ptr; /* Must be loaded only once. */
+ if (EXPECT(entry -> qtid == qtid, 1)) {
+ GC_ASSERT(entry -> thread == pthread_self());
+ return entry -> value;
+ }
+ return PREFIXED(slow_getspecific) (key, qtid, entry_ptr);
+}
+
+
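A minimal usage sketch of this interface. PREFIXED(key_create), PREFIXED(setspecific), and PREFIXED(getspecific) expand to GC_key_create, GC_setspecific, and GC_getspecific per the macros above; my_key and the caller functions are hypothetical, and the allocation lock is assumed held where the comments above require it:

    static GC_key_t my_key = 0;

    static void thread_setup_sketch(void *datum)
    {
        if (my_key == 0)
            GC_key_create(&my_key, 0);   /* destructors are unsupported  */
        GC_setspecific(my_key, datum);   /* at most once per thread/key  */
    }

    static void *thread_lookup_sketch(void)
    {
        return GC_getspecific(my_key);   /* fast path: qtid cache hit    */
    }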
Added: llvm-gcc-4.2/trunk/boehm-gc/include/weakpointer.h
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/include/weakpointer.h?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/include/weakpointer.h (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/include/weakpointer.h Thu Nov 8 16:56:19 2007
@@ -0,0 +1,221 @@
+#ifndef _weakpointer_h_
+#define _weakpointer_h_
+
+/****************************************************************************
+
+WeakPointer and CleanUp
+
+ Copyright (c) 1991 by Xerox Corporation. All rights reserved.
+
+ THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+
+ Permission is hereby granted to copy this code for any purpose,
+ provided the above notices are retained on all copies.
+
+ Last modified on Mon Jul 17 18:16:01 PDT 1995 by ellis
+
+****************************************************************************/
+
+/****************************************************************************
+
+WeakPointer
+
+A weak pointer is a pointer to a heap-allocated object that doesn't
+prevent the object from being garbage collected. Weak pointers can be
+used to track which objects haven't yet been reclaimed by the
+collector. A weak pointer is deactivated when the collector discovers
+its referent object is unreachable by normal pointers (reachability
+and deactivation are defined more precisely below). A deactivated weak
+pointer remains deactivated forever.
+
+****************************************************************************/
+
+
+template< class T > class WeakPointer {
+public:
+
+WeakPointer( T* t = 0 )
+ /* Constructs a weak pointer for *t. t may be null. It is an error
+ if t is non-null and *t is not a collected object. */
+ {impl = _WeakPointer_New( t );}
+
+T* Pointer()
+ /* wp.Pointer() returns a pointer to the referent object of wp or
+ null if wp has been deactivated (because its referent object
+ has been discovered unreachable by the collector). */
+ {return (T*) _WeakPointer_Pointer( this->impl );}
+
+int operator==( WeakPointer< T > wp2 )
+ /* Given weak pointers wp1 and wp2, if wp1 == wp2, then wp1 and
+ wp2 refer to the same object. If wp1 != wp2, then either wp1
+ and wp2 don't refer to the same object, or if they do, one or
+ both of them has been deactivated. (Note: If objects t1 and t2
+ are never made reachable by their clean-up functions, then
+ WeakPointer<T>(t1) == WeakPointer<T>(t2) if and only if t1 == t2.) */
+ {return _WeakPointer_Equal( this->impl, wp2.impl );}
+
+int Hash()
+ /* Returns a hash code suitable for use by multiplicative- and
+ division-based hash tables. If wp1 == wp2, then wp1.Hash() ==
+ wp2.Hash(). */
+ {return _WeakPointer_Hash( this->impl );}
+
+private:
+void* impl;
+};
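A short usage sketch (illustrative only; Node is a hypothetical type whose instances are assumed to live in the collected heap):

    struct Node { Node* next; int value; };

    void observe(Node* n) {
        WeakPointer<Node> wp(n);
        // ... n may become unreachable and be collected here ...
        if (Node* p = wp.Pointer()) {
            // referent still live; p is safe to use
        } else {
            // wp was deactivated and will return null forever
        }
    }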
+
+/*****************************************************************************
+
+CleanUp
+
+A garbage-collected object can have an associated clean-up function
+that will be invoked some time after the collector discovers the
+object is unreachable via normal pointers. Clean-up functions can be
+used to release resources such as open-file handles or window handles
+when their containing objects become unreachable. If a C++ object has
+a non-empty explicit destructor (i.e. it contains programmer-written
+code), the destructor will be automatically registered as the object's
+initial clean-up function.
+
+There is no guarantee that the collector will detect every unreachable
+object (though it will find almost all of them). Clients should not
+rely on clean-up to cause some action to occur immediately -- clean-up
+is only a mechanism for improving resource usage.
+
+Every object with a clean-up function also has a clean-up queue. When
+the collector finds the object is unreachable, it enqueues it on its
+queue. The clean-up function is applied when the object is removed
+from the queue. By default, objects are enqueued on the garbage
+collector's queue, and the collector removes all objects from its
+queue after each collection. If a client supplies another queue for
+objects, it is his responsibility to remove objects (and cause their
+functions to be called) by polling it periodically.
+
+Clean-up queues allow clean-up functions accessing global data to
+synchronize with the main program. Garbage collection can occur at any
+time, and clean-ups invoked by the collector might access data in an
+inconsistent state. A client can control this by defining an explicit
+queue for objects and polling it at safe points.
+
+The following definitions are used by the specification below:
+
+Given a pointer t to a collected object, the base object BO(t) is the
+value returned by new when it created the object. (Because of multiple
+inheritance, t and BO(t) may not be the same address.)
+
+A weak pointer wp references an object *t if BO(wp.Pointer()) ==
+BO(t).
+
+***************************************************************************/
+
+template< class T, class Data > class CleanUp {
+public:
+
+static void Set( T* t, void c( Data* d, T* t ), Data* d = 0 )
+ /* Sets the clean-up function of object BO(t) to be <c, d>,
+ replacing any previously defined clean-up function for BO(t); c
+ and d can be null, but t cannot. Sets the clean-up queue for
+ BO(t) to be the collector's queue. When t is removed from its
+ clean-up queue, its clean-up will be applied by calling c(d,
+ t). It is an error if *t is not a collected object. */
+ {_CleanUp_Set( t, c, d );}
+
+static void Call( T* t )
+ /* Sets the new clean-up function for BO(t) to be null and, if the
+ old one is non-null, calls it immediately, even if BO(t) is
+ still reachable. Deactivates any weak pointers to BO(t). */
+ {_CleanUp_Call( t );}
+
+class Queue {public:
+ Queue()
+ /* Constructs a new queue. */
+ {this->head = _CleanUp_Queue_NewHead();}
+
+ void Set( T* t )
+ /* q.Set(t) sets the clean-up queue of BO(t) to be q. */
+ {_CleanUp_Queue_Set( this->head, t );}
+
+ int Call()
+ /* If q is non-empty, q.Call() removes the first object and
+ calls its clean-up function; does nothing if q is
+ empty. Returns true if there are more objects in the
+ queue. */
+ {return _CleanUp_Queue_Call( this->head );}
+
+ private:
+ void* head;
+ };
+};
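A usage sketch tying Set and Queue together. Window, Display, and close_handle are hypothetical; as the text above notes, supplying our own queue makes it our job to poll it at safe points:

    struct Display { /* ... */ };
    struct Window  { int os_handle; };  // assumed to be a collected object

    static void close_handle(Display* d, Window* w) {
        // release w->os_handle against d
    }

    static CleanUp<Window, Display>::Queue q;

    void track(Display* display, Window* w) {
        CleanUp<Window, Display>::Set(w, close_handle, display);
        q.Set(w);                       // enqueue on q, not the GC's queue
    }

    void at_safe_point() {
        while (q.Call()) { }            // drain pending clean-ups
    }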
+
+/**********************************************************************
+
+Reachability and Clean-up
+
+An object O is reachable if it can be reached via a non-empty path of
+normal pointers from the registers, stacks, global variables, or an
+object with a non-null clean-up function (including O itself),
+ignoring pointers from an object to itself.
+
+This definition of reachability ensures that if object B is accessible
+from object A (and not vice versa) and if both A and B have clean-up
+functions, then A will always be cleaned up before B. Note that as
+long as an object with a clean-up function is contained in a cycle of
+pointers, it will always be reachable and will never be cleaned up or
+collected.
+
+When the collector finds an unreachable object with a null clean-up
+function, it atomically deactivates all weak pointers referencing the
+object and recycles its storage. If object B is accessible from object
+A via a path of normal pointers, A will be discovered unreachable no
+later than B, and a weak pointer to A will be deactivated no later
+than a weak pointer to B.
+
+When the collector finds an unreachable object with a non-null
+clean-up function, the collector atomically deactivates all weak
+pointers referencing the object, redefines its clean-up function to be
+null, and enqueues it on its clean-up queue. The object then becomes
+reachable again and remains reachable at least until its clean-up
+function executes.
+
+The clean-up function is assured that its argument is the only
+accessible pointer to the object. Nothing prevents the function from
+redefining the object's clean-up function or making the object
+reachable again (for example, by storing the pointer in a global
+variable).
+
+If the clean-up function does not make its object reachable again and
+does not redefine its clean-up function, then the object will be
+collected by a subsequent collection (because the object remains
+unreachable and now has a null clean-up function). If the clean-up
+function does make its object reachable again and a clean-up function
+is subsequently redefined for the object, then the new clean-up
+function will be invoked the next time the collector finds the object
+unreachable.
+
+Note that a destructor for a collected object cannot safely redefine a
+clean-up function for its object, since after the destructor executes,
+the object has been destroyed into "raw memory". (In most
+implementations, destroying an object mutates its vtbl.)
+
+Finally, note that calling delete t on a collected object first
+deactivates any weak pointers to t and then invokes its clean-up
+function (destructor).
+
+**********************************************************************/
+
+extern "C" {
+ void* _WeakPointer_New( void* t );
+ void* _WeakPointer_Pointer( void* wp );
+ int _WeakPointer_Equal( void* wp1, void* wp2 );
+ int _WeakPointer_Hash( void* wp );
+ void _CleanUp_Set( void* t, void (*c)( void* d, void* t ), void* d );
+ void _CleanUp_Call( void* t );
+ void* _CleanUp_Queue_NewHead ();
+ void _CleanUp_Queue_Set( void* h, void* t );
+ int _CleanUp_Queue_Call( void* h );
+}
+
+#endif /* _weakpointer_h_ */
+
+
Added: llvm-gcc-4.2/trunk/boehm-gc/mach_dep.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/mach_dep.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/mach_dep.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/mach_dep.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,627 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, November 17, 1995 12:13 pm PST */
+# include "private/gc_priv.h"
+# include <stdio.h>
+# include <setjmp.h>
+# if defined(OS2) || defined(CX_UX)
+# define _setjmp(b) setjmp(b)
+# define _longjmp(b,v) longjmp(b,v)
+# endif
+# ifdef AMIGA
+# ifndef __GNUC__
+# include <dos.h>
+# else
+# include <machine/reg.h>
+# endif
+# endif
+
+#if defined(RS6000) || defined(POWERPC)
+# include <ucontext.h>
+#endif
+
+#if defined(__MWERKS__) && !defined(POWERPC)
+
+asm static void PushMacRegisters()
+{
+ sub.w #4,sp // reserve space for one parameter.
+ move.l a2,(sp)
+ jsr GC_push_one
+ move.l a3,(sp)
+ jsr GC_push_one
+ move.l a4,(sp)
+ jsr GC_push_one
+# if !__option(a6frames)
+ // <pcb> perhaps a6 should be pushed if stack frames are not being used.
+ move.l a6,(sp)
+ jsr GC_push_one
+# endif
+ // skip a5 (globals), a6 (frame pointer), and a7 (stack pointer)
+ move.l d2,(sp)
+ jsr GC_push_one
+ move.l d3,(sp)
+ jsr GC_push_one
+ move.l d4,(sp)
+ jsr GC_push_one
+ move.l d5,(sp)
+ jsr GC_push_one
+ move.l d6,(sp)
+ jsr GC_push_one
+ move.l d7,(sp)
+ jsr GC_push_one
+ add.w #4,sp // fix stack.
+ rts
+}
+
+#endif /* __MWERKS__ */
+
+# if defined(SPARC) || defined(IA64)
+ /* Value returned from register flushing routine; either sp (SPARC) */
+ /* or ar.bsp (IA64) */
+ word GC_save_regs_ret_val;
+# endif
+
+/* Routine to mark from registers that are preserved by the C compiler. */
+/* This must be ported to every new architecture. There is a generic */
+/* version at the end, that is likely, but not guaranteed to work */
+/* on your architecture. Run the test_setjmp program to see whether */
+/* there is any chance it will work. */
+
+#if !defined(USE_GENERIC_PUSH_REGS) && !defined(USE_ASM_PUSH_REGS)
+#undef HAVE_PUSH_REGS
+void GC_push_regs()
+{
+# ifdef RT
+ register long TMP_SP; /* must be bound to r11 */
+# endif
+
+# ifdef VAX
+ /* VAX - generic code below does not work under 4.2 */
+ /* r1 through r5 are caller save, and therefore */
+ /* on the stack or dead. */
+ asm("pushl r11"); asm("calls $1,_GC_push_one");
+ asm("pushl r10"); asm("calls $1,_GC_push_one");
+ asm("pushl r9"); asm("calls $1,_GC_push_one");
+ asm("pushl r8"); asm("calls $1,_GC_push_one");
+ asm("pushl r7"); asm("calls $1,_GC_push_one");
+ asm("pushl r6"); asm("calls $1,_GC_push_one");
+# define HAVE_PUSH_REGS
+# endif
+# if defined(M68K) && (defined(SUNOS4) || defined(NEXT))
+ /* M68K SUNOS - could be replaced by generic code */
+ /* a0, a1 and d1 are caller save */
+ /* and therefore are on stack or dead. */
+
+ asm("subqw #0x4,sp"); /* allocate word on top of stack */
+
+ asm("movl a2,sp@"); asm("jbsr _GC_push_one");
+ asm("movl a3,sp@"); asm("jbsr _GC_push_one");
+ asm("movl a4,sp@"); asm("jbsr _GC_push_one");
+ asm("movl a5,sp@"); asm("jbsr _GC_push_one");
+ /* Skip frame pointer and stack pointer */
+ asm("movl d1,sp@"); asm("jbsr _GC_push_one");
+ asm("movl d2,sp@"); asm("jbsr _GC_push_one");
+ asm("movl d3,sp@"); asm("jbsr _GC_push_one");
+ asm("movl d4,sp@"); asm("jbsr _GC_push_one");
+ asm("movl d5,sp@"); asm("jbsr _GC_push_one");
+ asm("movl d6,sp@"); asm("jbsr _GC_push_one");
+ asm("movl d7,sp@"); asm("jbsr _GC_push_one");
+
+ asm("addqw #0x4,sp"); /* put stack back where it was */
+# define HAVE_PUSH_REGS
+# endif
+
+# if defined(M68K) && defined(HP)
+ /* M68K HP - could be replaced by generic code */
+ /* a0, a1 and d1 are caller save. */
+
+ asm("subq.w &0x4,%sp"); /* allocate word on top of stack */
+
+ asm("mov.l %a2,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a3,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a4,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a5,(%sp)"); asm("jsr _GC_push_one");
+ /* Skip frame pointer and stack pointer */
+ asm("mov.l %d1,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d2,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d3,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d4,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d5,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d6,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d7,(%sp)"); asm("jsr _GC_push_one");
+
+ asm("addq.w &0x4,%sp"); /* put stack back where it was */
+# define HAVE_PUSH_REGS
+# endif /* M68K HP */
+
+# if defined(M68K) && defined(AMIGA)
+ /* AMIGA - could be replaced by generic code */
+ /* a0, a1, d0 and d1 are caller save */
+
+# ifdef __GNUC__
+ asm("subq.w &0x4,%sp"); /* allocate word on top of stack */
+
+ asm("mov.l %a2,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a3,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a4,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a5,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %a6,(%sp)"); asm("jsr _GC_push_one");
+ /* Skip frame pointer and stack pointer */
+ asm("mov.l %d2,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d3,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d4,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d5,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d6,(%sp)"); asm("jsr _GC_push_one");
+ asm("mov.l %d7,(%sp)"); asm("jsr _GC_push_one");
+
+ asm("addq.w &0x4,%sp"); /* put stack back where it was */
+# define HAVE_PUSH_REGS
+# else /* !__GNUC__ */
+ GC_push_one(getreg(REG_A2));
+ GC_push_one(getreg(REG_A3));
+# ifndef __SASC
+ /* Can probably be changed to #if 0 -Kjetil M. (a4=globals)*/
+ GC_push_one(getreg(REG_A4));
+# endif
+ GC_push_one(getreg(REG_A5));
+ GC_push_one(getreg(REG_A6));
+ /* Skip stack pointer */
+ GC_push_one(getreg(REG_D2));
+ GC_push_one(getreg(REG_D3));
+ GC_push_one(getreg(REG_D4));
+ GC_push_one(getreg(REG_D5));
+ GC_push_one(getreg(REG_D6));
+ GC_push_one(getreg(REG_D7));
+# define HAVE_PUSH_REGS
+# endif /* !__GNUC__ */
+# endif /* AMIGA */
+
+# if defined(M68K) && defined(MACOS)
+# if defined(THINK_C)
+# define PushMacReg(reg) \
+ move.l reg,(sp) \
+ jsr GC_push_one
+ asm {
+ sub.w #4,sp ; reserve space for one parameter.
+ PushMacReg(a2);
+ PushMacReg(a3);
+ PushMacReg(a4);
+ ; skip a5 (globals), a6 (frame pointer), and a7 (stack pointer)
+ PushMacReg(d2);
+ PushMacReg(d3);
+ PushMacReg(d4);
+ PushMacReg(d5);
+ PushMacReg(d6);
+ PushMacReg(d7);
+ add.w #4,sp ; fix stack.
+ }
+# define HAVE_PUSH_REGS
+# undef PushMacReg
+# endif /* THINK_C */
+# if defined(__MWERKS__)
+ PushMacRegisters();
+# define HAVE_PUSH_REGS
+# endif /* __MWERKS__ */
+# endif /* MACOS */
+
+# if defined(I386) &&!defined(OS2) &&!defined(SVR4) \
+ && (defined(__MINGW32__) || !defined(MSWIN32)) \
+ && !defined(SCO) && !defined(SCO_ELF) \
+ && !(defined(LINUX) && defined(__ELF__)) \
+ && !(defined(FREEBSD) && defined(__ELF__)) \
+ && !(defined(NETBSD) && defined(__ELF__)) \
+ && !(defined(OPENBSD) && defined(__ELF__)) \
+ && !(defined(BEOS) && defined(__ELF__)) \
+ && !defined(DOS4GW) && !defined(HURD)
+	/* I386 code: the generic code does not appear to work here. */
+	/* (It does appear to work under OS2, where asms don't.) */
+	/* This is used for some 386 UNIX variants and for CYGWIN32. */
+ asm("pushl %eax"); asm("call _GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %ecx"); asm("call _GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %edx"); asm("call _GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %ebp"); asm("call _GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %esi"); asm("call _GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %edi"); asm("call _GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %ebx"); asm("call _GC_push_one"); asm("addl $4,%esp");
+# define HAVE_PUSH_REGS
+# endif
+
+# if ( defined(I386) && defined(LINUX) && defined(__ELF__) ) \
+ || ( defined(I386) && defined(FREEBSD) && defined(__ELF__) ) \
+ || ( defined(I386) && defined(NETBSD) && defined(__ELF__) ) \
+ || ( defined(I386) && defined(OPENBSD) && defined(__ELF__) ) \
+ || ( defined(I386) && defined(HURD) && defined(__ELF__) ) \
+ || ( defined(I386) && defined(DGUX) )
+
+	/* This is modified for Linux with ELF (Note: __ELF__ only) */
+ /* This section handles FreeBSD with ELF. */
+ /* Eax is caller-save and dead here. Other caller-save */
+ /* registers could also be skipped. We assume there are no */
+ /* pointers in MMX registers, etc. */
+ /* We combine instructions in a single asm to prevent gcc from */
+ /* inserting code in the middle. */
+ asm("pushl %ecx; call GC_push_one; addl $4,%esp");
+ asm("pushl %edx; call GC_push_one; addl $4,%esp");
+ asm("pushl %ebp; call GC_push_one; addl $4,%esp");
+ asm("pushl %esi; call GC_push_one; addl $4,%esp");
+ asm("pushl %edi; call GC_push_one; addl $4,%esp");
+ asm("pushl %ebx; call GC_push_one; addl $4,%esp");
+# define HAVE_PUSH_REGS
+# endif
+
+# if ( defined(I386) && defined(BEOS) && defined(__ELF__) )
+ /* As far as I can understand from */
+ /* http://www.beunited.org/articles/jbq/nasm.shtml, */
+ /* only ebp, esi, edi and ebx are not scratch. How MMX */
+ /* etc. registers should be treated, I have no idea. */
+ asm("pushl %ebp; call GC_push_one; addl $4,%esp");
+ asm("pushl %esi; call GC_push_one; addl $4,%esp");
+ asm("pushl %edi; call GC_push_one; addl $4,%esp");
+ asm("pushl %ebx; call GC_push_one; addl $4,%esp");
+# define HAVE_PUSH_REGS
+# endif
+
+# if defined(I386) && defined(MSWIN32) && !defined(__MINGW32__) \
+ && !defined(USE_GENERIC)
+ /* I386 code, Microsoft variant */
+ __asm push eax
+ __asm call GC_push_one
+ __asm add esp,4
+ __asm push ebx
+ __asm call GC_push_one
+ __asm add esp,4
+ __asm push ecx
+ __asm call GC_push_one
+ __asm add esp,4
+ __asm push edx
+ __asm call GC_push_one
+ __asm add esp,4
+ __asm push ebp
+ __asm call GC_push_one
+ __asm add esp,4
+ __asm push esi
+ __asm call GC_push_one
+ __asm add esp,4
+ __asm push edi
+ __asm call GC_push_one
+ __asm add esp,4
+# define HAVE_PUSH_REGS
+# endif
+
+# if defined(I386) && (defined(SVR4) || defined(SCO) || defined(SCO_ELF))
+ /* I386 code, SVR4 variant, generic code does not appear to work */
+ asm("pushl %eax"); asm("call GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %ebx"); asm("call GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %ecx"); asm("call GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %edx"); asm("call GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %ebp"); asm("call GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %esi"); asm("call GC_push_one"); asm("addl $4,%esp");
+ asm("pushl %edi"); asm("call GC_push_one"); asm("addl $4,%esp");
+# define HAVE_PUSH_REGS
+# endif
+
+# ifdef NS32K
+ asm ("movd r3, tos"); asm ("bsr ?_GC_push_one"); asm ("adjspb $-4");
+ asm ("movd r4, tos"); asm ("bsr ?_GC_push_one"); asm ("adjspb $-4");
+ asm ("movd r5, tos"); asm ("bsr ?_GC_push_one"); asm ("adjspb $-4");
+ asm ("movd r6, tos"); asm ("bsr ?_GC_push_one"); asm ("adjspb $-4");
+ asm ("movd r7, tos"); asm ("bsr ?_GC_push_one"); asm ("adjspb $-4");
+# define HAVE_PUSH_REGS
+# endif
+
+# if defined(SPARC)
+ GC_save_regs_ret_val = GC_save_regs_in_stack();
+# define HAVE_PUSH_REGS
+# endif
+
+# ifdef RT
+ GC_push_one(TMP_SP); /* GC_push_one from r11 */
+
+ asm("cas r11, r6, r0"); GC_push_one(TMP_SP); /* r6 */
+ asm("cas r11, r7, r0"); GC_push_one(TMP_SP); /* through */
+ asm("cas r11, r8, r0"); GC_push_one(TMP_SP); /* r10 */
+ asm("cas r11, r9, r0"); GC_push_one(TMP_SP);
+ asm("cas r11, r10, r0"); GC_push_one(TMP_SP);
+
+ asm("cas r11, r12, r0"); GC_push_one(TMP_SP); /* r12 */
+ asm("cas r11, r13, r0"); GC_push_one(TMP_SP); /* through */
+ asm("cas r11, r14, r0"); GC_push_one(TMP_SP); /* r15 */
+ asm("cas r11, r15, r0"); GC_push_one(TMP_SP);
+# define HAVE_PUSH_REGS
+# endif
+
+# if defined(M68K) && defined(SYSV)
+ /* Once again similar to SUN and HP, though setjmp appears to work.
+ --Parag
+ */
+# ifdef __GNUC__
+ asm("subqw #0x4,%sp"); /* allocate word on top of stack */
+
+ asm("movl %a2,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %a3,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %a4,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %a5,%sp@"); asm("jbsr GC_push_one");
+ /* Skip frame pointer and stack pointer */
+ asm("movl %d1,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %d2,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %d3,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %d4,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %d5,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %d6,%sp@"); asm("jbsr GC_push_one");
+ asm("movl %d7,%sp@"); asm("jbsr GC_push_one");
+
+ asm("addqw #0x4,%sp"); /* put stack back where it was */
+# define HAVE_PUSH_REGS
+# else /* !__GNUC__*/
+ asm("subq.w &0x4,%sp"); /* allocate word on top of stack */
+
+ asm("mov.l %a2,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %a3,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %a4,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %a5,(%sp)"); asm("jsr GC_push_one");
+ /* Skip frame pointer and stack pointer */
+ asm("mov.l %d1,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %d2,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %d3,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %d4,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %d5,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %d6,(%sp)"); asm("jsr GC_push_one");
+ asm("mov.l %d7,(%sp)"); asm("jsr GC_push_one");
+
+ asm("addq.w &0x4,%sp"); /* put stack back where it was */
+# define HAVE_PUSH_REGS
+# endif /* !__GNUC__ */
+# endif /* M68K/SYSV */
+
+# if defined(PJ)
+ {
+ register int * sp asm ("optop");
+ extern int *__libc_stack_end;
+
+ GC_push_all_stack (sp, __libc_stack_end);
+# define HAVE_PUSH_REGS
+ /* Isn't this redundant with the code to push the stack? */
+ }
+# endif
+
+ /* other machines... */
+# if !defined(HAVE_PUSH_REGS)
+ --> We just generated an empty GC_push_regs, which
+ --> is almost certainly broken. Try defining
+ --> USE_GENERIC_PUSH_REGS instead.
+# endif
+}
+#endif /* !USE_GENERIC_PUSH_REGS && !USE_ASM_PUSH_REGS */
+
+void GC_with_callee_saves_pushed(fn, arg)
+void (*fn)();
+ptr_t arg;
+{
+ word dummy;
+
+# if defined(USE_GENERIC_PUSH_REGS)
+# ifdef HAVE_BUILTIN_UNWIND_INIT
+ /* This was suggested by Richard Henderson as the way to */
+ /* force callee-save registers and register windows onto */
+ /* the stack. */
+ __builtin_unwind_init();
+# else /* !HAVE_BUILTIN_UNWIND_INIT */
+# if defined(RS6000) || defined(POWERPC)
+ /* FIXME: RS6000 means AIX. */
+ /* This should probably be used in all Posix/non-gcc */
+ /* settings. We defer that change to minimize risk. */
+ ucontext_t ctxt;
+ getcontext(&ctxt);
+# else
+ /* Generic code */
+ /* The idea is due to Parag Patel at HP. */
+ /* We're not sure whether he would like */
+	  /* to be acknowledged for it or not. */
+ jmp_buf regs;
+ register word * i = (word *) regs;
+ register ptr_t lim = (ptr_t)(regs) + (sizeof regs);
+
+ /* Setjmp doesn't always clear all of the buffer. */
+ /* That tends to preserve garbage. Clear it. */
+ for (; (char *)i < lim; i++) {
+ *i = 0;
+ }
+# if defined(MSWIN32) || defined(MSWINCE) \
+ || defined(UTS4) || defined(LINUX) || defined(EWS4800)
+ (void) setjmp(regs);
+# else
+ (void) _setjmp(regs);
+ /* We don't want to mess with signals. According to */
+	    /* SUSV3, setjmp() may or may not save the signal mask. */
+ /* _setjmp won't, but is less portable. */
+# endif
+# endif /* !AIX ... */
+# endif /* !HAVE_BUILTIN_UNWIND_INIT */
+# else
+# if defined(PTHREADS) && !defined(MSWIN32) /* !USE_GENERIC_PUSH_REGS */
+ /* We may still need this to save thread contexts. */
+ ucontext_t ctxt;
+ getcontext(&ctxt);
+# else /* Shouldn't be needed */
+ ABORT("Unexpected call to GC_with_callee_saves_pushed");
+# endif
+# endif
+# if (defined(SPARC) && !defined(HAVE_BUILTIN_UNWIND_INIT)) \
+ || defined(IA64)
+ /* On a register window machine, we need to save register */
+ /* contents on the stack for this to work. The setjmp */
+ /* is probably not needed on SPARC, since pointers are */
+ /* only stored in windowed or scratch registers. It is */
+ /* needed on IA64, since some non-windowed registers are */
+ /* preserved. */
+ {
+ GC_save_regs_ret_val = GC_save_regs_in_stack();
+	  /* On IA64 gcc, could use __builtin_ia64_bsp() and */
+	  /* __builtin_ia64_flushrs(). The latter will be done */
+ /* implicitly by __builtin_unwind_init() for gcc3.0.1 */
+ /* and later. */
+ }
+# endif
+ fn(arg);
+ /* Strongly discourage the compiler from treating the above */
+ /* as a tail-call, since that would pop the register */
+ /* contents before we get a chance to look at them. */
+ GC_noop1((word)(&dummy));
+}
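The generic path above rests on a well-known trick: setjmp spills the callee-save registers into a jmp_buf on the stack, where the conservative stack scan will see them. A standalone sketch of just that trick (illustrative; real code must also defeat tail-call optimization, which is what the GC_noop1 call above does):

    #include <setjmp.h>
    #include <string.h>

    static void with_registers_on_stack(void (*fn)(void *), void *arg)
    {
        jmp_buf regs;
        memset(&regs, 0, sizeof regs);  /* setjmp may not fill the whole  */
                                        /* buffer; don't preserve garbage */
        (void)setjmp(regs);             /* spills callee-save registers   */
        fn(arg);
    }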
+
+#if defined(USE_GENERIC_PUSH_REGS)
+void GC_generic_push_regs(cold_gc_frame)
+ptr_t cold_gc_frame;
+{
+ GC_with_callee_saves_pushed(GC_push_current_stack, cold_gc_frame);
+}
+#endif /* USE_GENERIC_PUSH_REGS */
+
+/* On register window machines, we need a way to force registers into */
+/* the stack. Return sp. */
+# ifdef SPARC
+ asm(" .seg \"text\"");
+# if defined(SVR4) || defined(NETBSD) || defined(FREEBSD)
+ asm(" .globl GC_save_regs_in_stack");
+ asm("GC_save_regs_in_stack:");
+ asm(" .type GC_save_regs_in_stack,#function");
+# else
+ asm(" .globl _GC_save_regs_in_stack");
+ asm("_GC_save_regs_in_stack:");
+# endif
+# if defined(__arch64__) || defined(__sparcv9)
+ asm(" save %sp,-128,%sp");
+ asm(" flushw");
+ asm(" ret");
+ asm(" restore %sp,2047+128,%o0");
+# else
+ asm(" ta 0x3 ! ST_FLUSH_WINDOWS");
+ asm(" retl");
+ asm(" mov %sp,%o0");
+# endif
+# ifdef SVR4
+ asm(" .GC_save_regs_in_stack_end:");
+ asm(" .size GC_save_regs_in_stack,.GC_save_regs_in_stack_end-GC_save_regs_in_stack");
+# endif
+# ifdef LINT
+ word GC_save_regs_in_stack() { return(0 /* sp really */);}
+# endif
+# endif
+
+/* On IA64, we also need to flush register windows. But they end */
+/* up on the other side of the stack segment. */
+/* Returns the backing store pointer for the register stack. */
+/* We now implement this as a separate assembly file, since inline */
+/* assembly code here doesn't work with either the Intel or HP */
+/* compilers. */
+# if 0
+# ifdef LINUX
+ asm(" .text");
+ asm(" .psr abi64");
+ asm(" .psr lsb");
+ asm(" .lsb");
+ asm("");
+ asm(" .text");
+ asm(" .align 16");
+ asm(" .global GC_save_regs_in_stack");
+ asm(" .proc GC_save_regs_in_stack");
+ asm("GC_save_regs_in_stack:");
+ asm(" .body");
+ asm(" flushrs");
+ asm(" ;;");
+ asm(" mov r8=ar.bsp");
+ asm(" br.ret.sptk.few rp");
+ asm(" .endp GC_save_regs_in_stack");
+# endif /* LINUX */
+# if 0 /* Other alternatives that don't work on HP/UX */
+ word GC_save_regs_in_stack() {
+# if USE_BUILTINS
+ __builtin_ia64_flushrs();
+ return __builtin_ia64_bsp();
+# else
+# ifdef HPUX
+ _asm(" flushrs");
+ _asm(" ;;");
+ _asm(" mov r8=ar.bsp");
+ _asm(" br.ret.sptk.few rp");
+# else
+ asm(" flushrs");
+ asm(" ;;");
+ asm(" mov r8=ar.bsp");
+ asm(" br.ret.sptk.few rp");
+# endif
+# endif
+ }
+# endif
+# endif
+
+/* GC_clear_stack_inner(arg, limit) clears stack area up to limit and */
+/* returns arg. Stack clearing is crucial on SPARC, so we supply */
+/* an assembly version that's more careful. Assumes limit is hotter */
+/* than sp, and limit is 8 byte aligned. */
+#if defined(ASM_CLEAR_CODE)
+#ifndef SPARC
+ --> fix it
+#endif
+# ifdef SUNOS4
+ asm(".globl _GC_clear_stack_inner");
+ asm("_GC_clear_stack_inner:");
+# else
+ asm(".globl GC_clear_stack_inner");
+ asm("GC_clear_stack_inner:");
+  asm(".type GC_clear_stack_inner,#function");
+# endif
+#if defined(__arch64__) || defined(__sparcv9)
+ asm("mov %sp,%o2"); /* Save sp */
+ asm("add %sp,2047-8,%o3"); /* p = sp+bias-8 */
+ asm("add %o1,-2047-192,%sp"); /* Move sp out of the way, */
+ /* so that traps still work. */
+ /* Includes some extra words */
+ /* so we can be sloppy below. */
+ asm("loop:");
+ asm("stx %g0,[%o3]"); /* *(long *)p = 0 */
+ asm("cmp %o3,%o1");
+ asm("bgu,pt %xcc, loop"); /* if (p > limit) goto loop */
+ asm("add %o3,-8,%o3"); /* p -= 8 (delay slot) */
+ asm("retl");
+ asm("mov %o2,%sp"); /* Restore sp., delay slot */
+#else
+ asm("mov %sp,%o2"); /* Save sp */
+ asm("add %sp,-8,%o3"); /* p = sp-8 */
+ asm("clr %g1"); /* [g0,g1] = 0 */
+ asm("add %o1,-0x60,%sp"); /* Move sp out of the way, */
+ /* so that traps still work. */
+ /* Includes some extra words */
+ /* so we can be sloppy below. */
+ asm("loop:");
+ asm("std %g0,[%o3]"); /* *(long long *)p = 0 */
+ asm("cmp %o3,%o1");
+ asm("bgu loop "); /* if (p > limit) goto loop */
+ asm("add %o3,-8,%o3"); /* p -= 8 (delay slot) */
+ asm("retl");
+ asm("mov %o2,%sp"); /* Restore sp., delay slot */
+#endif /* old SPARC */
+ /* First argument = %o0 = return value */
+# ifdef SVR4
+ asm(" .GC_clear_stack_inner_end:");
+ asm(" .size GC_clear_stack_inner,.GC_clear_stack_inner_end-GC_clear_stack_inner");
+# endif
+
+# ifdef LINT
+ /*ARGSUSED*/
+ ptr_t GC_clear_stack_inner(arg, limit)
+ ptr_t arg; word limit;
+ { return(arg); }
+# endif
+#endif
Added: llvm-gcc-4.2/trunk/boehm-gc/malloc.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/malloc.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/malloc.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/malloc.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,502 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 2000 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, February 7, 1996 4:32 pm PST */
+
+#include <stdio.h>
+#include "private/gc_priv.h"
+
+extern ptr_t GC_clear_stack(); /* in misc.c, behaves like identity */
+void GC_extend_size_map(); /* in misc.c. */
+
+/* Allocate reclaim list for kind: */
+/* Return TRUE on success */
+GC_bool GC_alloc_reclaim_list(kind)
+register struct obj_kind * kind;
+{
+ struct hblk ** result = (struct hblk **)
+ GC_scratch_alloc((MAXOBJSZ+1) * sizeof(struct hblk *));
+ if (result == 0) return(FALSE);
+ BZERO(result, (MAXOBJSZ+1)*sizeof(struct hblk *));
+ kind -> ok_reclaim_list = result;
+ return(TRUE);
+}
+
+/* Allocate a large block of size lw words. */
+/* The block is not cleared. */
+/* Flags is 0 or IGNORE_OFF_PAGE. */
+/* We hold the allocation lock. */
+ptr_t GC_alloc_large(lw, k, flags)
+word lw;
+int k;
+unsigned flags;
+{
+ struct hblk * h;
+ word n_blocks = OBJ_SZ_TO_BLOCKS(lw);
+ ptr_t result;
+
+ if (!GC_is_initialized) GC_init_inner();
+ /* Do our share of marking work */
+ if(GC_incremental && !GC_dont_gc)
+ GC_collect_a_little_inner((int)n_blocks);
+ h = GC_allochblk(lw, k, flags);
+# ifdef USE_MUNMAP
+ if (0 == h) {
+ GC_merge_unmapped();
+ h = GC_allochblk(lw, k, flags);
+ }
+# endif
+ while (0 == h && GC_collect_or_expand(n_blocks, (flags != 0))) {
+ h = GC_allochblk(lw, k, flags);
+ }
+ if (h == 0) {
+ result = 0;
+ } else {
+ int total_bytes = n_blocks * HBLKSIZE;
+ if (n_blocks > 1) {
+ GC_large_allocd_bytes += total_bytes;
+ if (GC_large_allocd_bytes > GC_max_large_allocd_bytes)
+ GC_max_large_allocd_bytes = GC_large_allocd_bytes;
+ }
+ result = (ptr_t) (h -> hb_body);
+ GC_words_wasted += BYTES_TO_WORDS(total_bytes) - lw;
+ }
+ return result;
+}
+
+
+/* Allocate a large block of size lw words. Clear if appropriate. */
+/* We hold the allocation lock. */
+ptr_t GC_alloc_large_and_clear(lw, k, flags)
+word lw;
+int k;
+unsigned flags;
+{
+ ptr_t result = GC_alloc_large(lw, k, flags);
+ word n_blocks = OBJ_SZ_TO_BLOCKS(lw);
+
+ if (0 == result) return 0;
+ if (GC_debugging_started || GC_obj_kinds[k].ok_init) {
+ /* Clear the whole block, in case of GC_realloc call. */
+ BZERO(result, n_blocks * HBLKSIZE);
+ }
+ return result;
+}
+
+/* Allocate lb bytes for an object of kind k. */
+/* Should not be used directly to allocate */
+/* objects such as STUBBORN objects that */
+/* require special handling on allocation. */
+/* First a version that assumes we already */
+/* hold lock: */
+ptr_t GC_generic_malloc_inner(lb, k)
+register word lb;
+register int k;
+{
+register word lw;
+register ptr_t op;
+register ptr_t *opp;
+
+ if( SMALL_OBJ(lb) ) {
+ register struct obj_kind * kind = GC_obj_kinds + k;
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+ if (lw == 0) lw = MIN_WORDS;
+# endif
+ opp = &(kind -> ok_freelist[lw]);
+ if( (op = *opp) == 0 ) {
+# ifdef MERGE_SIZES
+ if (GC_size_map[lb] == 0) {
+ if (!GC_is_initialized) GC_init_inner();
+ if (GC_size_map[lb] == 0) GC_extend_size_map(lb);
+ return(GC_generic_malloc_inner(lb, k));
+ }
+# else
+ if (!GC_is_initialized) {
+ GC_init_inner();
+ return(GC_generic_malloc_inner(lb, k));
+ }
+# endif
+ if (kind -> ok_reclaim_list == 0) {
+ if (!GC_alloc_reclaim_list(kind)) goto out;
+ }
+ op = GC_allocobj(lw, k);
+ if (op == 0) goto out;
+ }
+ /* Here everything is in a consistent state. */
+ /* We assume the following assignment is */
+ /* atomic. If we get aborted */
+ /* after the assignment, we lose an object, */
+ /* but that's benign. */
+ /* Volatile declarations may need to be added */
+ /* to prevent the compiler from breaking things.*/
+ /* If we only execute the second of the */
+ /* following assignments, we lose the free */
+ /* list, but that should still be OK, at least */
+ /* for garbage collected memory. */
+ *opp = obj_link(op);
+ obj_link(op) = 0;
+ } else {
+ lw = ROUNDED_UP_WORDS(lb);
+ op = (ptr_t)GC_alloc_large_and_clear(lw, k, 0);
+ }
+ GC_words_allocd += lw;
+
+out:
+ return op;
+}
+
+/* Allocate a composite object of size n bytes. The caller guarantees */
+/* that pointers past the first page are not relevant. Caller holds */
+/* allocation lock. */
+ptr_t GC_generic_malloc_inner_ignore_off_page(lb, k)
+register size_t lb;
+register int k;
+{
+ register word lw;
+ ptr_t op;
+
+ if (lb <= HBLKSIZE)
+ return(GC_generic_malloc_inner((word)lb, k));
+ lw = ROUNDED_UP_WORDS(lb);
+ op = (ptr_t)GC_alloc_large_and_clear(lw, k, IGNORE_OFF_PAGE);
+ GC_words_allocd += lw;
+ return op;
+}
+
+ptr_t GC_generic_malloc(lb, k)
+register word lb;
+register int k;
+{
+ ptr_t result;
+ DCL_LOCK_STATE;
+
+ if (GC_have_errors) GC_print_all_errors();
+ GC_INVOKE_FINALIZERS();
+ if (SMALL_OBJ(lb)) {
+ DISABLE_SIGNALS();
+ LOCK();
+ result = GC_generic_malloc_inner((word)lb, k);
+ UNLOCK();
+ ENABLE_SIGNALS();
+ } else {
+ word lw;
+ word n_blocks;
+ GC_bool init;
+ lw = ROUNDED_UP_WORDS(lb);
+ n_blocks = OBJ_SZ_TO_BLOCKS(lw);
+ init = GC_obj_kinds[k].ok_init;
+ DISABLE_SIGNALS();
+ LOCK();
+ result = (ptr_t)GC_alloc_large(lw, k, 0);
+ if (0 != result) {
+ if (GC_debugging_started) {
+ BZERO(result, n_blocks * HBLKSIZE);
+ } else {
+# ifdef THREADS
+ /* Clear any memory that might be used for GC descriptors */
+ /* before we release the lock. */
+ ((word *)result)[0] = 0;
+ ((word *)result)[1] = 0;
+ ((word *)result)[lw-1] = 0;
+ ((word *)result)[lw-2] = 0;
+# endif
+ }
+ }
+ GC_words_allocd += lw;
+ UNLOCK();
+ ENABLE_SIGNALS();
+ if (init && !GC_debugging_started && 0 != result) {
+ BZERO(result, n_blocks * HBLKSIZE);
+ }
+ }
+ if (0 == result) {
+ return((*GC_oom_fn)(lb));
+ } else {
+ return(result);
+ }
+}
+
+
+#define GENERAL_MALLOC(lb,k) \
+    (GC_PTR)GC_clear_stack(GC_generic_malloc((word)(lb), k))
+/* We make the GC_clear_stack call a tail call, hoping to clear more	*/
+/* of the stack.							*/
+
+/* Allocate lb bytes of atomic (pointerfree) data */
+# ifdef __STDC__
+ GC_PTR GC_malloc_atomic(size_t lb)
+# else
+ GC_PTR GC_malloc_atomic(lb)
+ size_t lb;
+# endif
+{
+register ptr_t op;
+register ptr_t * opp;
+register word lw;
+DCL_LOCK_STATE;
+
+ if( EXPECT(SMALL_OBJ(lb), 1) ) {
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+# endif
+ opp = &(GC_aobjfreelist[lw]);
+ FASTLOCK();
+ if( EXPECT(!FASTLOCK_SUCCEEDED() || (op = *opp) == 0, 0) ) {
+ FASTUNLOCK();
+ return(GENERAL_MALLOC((word)lb, PTRFREE));
+ }
+ /* See above comment on signals. */
+ *opp = obj_link(op);
+ GC_words_allocd += lw;
+ FASTUNLOCK();
+ return((GC_PTR) op);
+ } else {
+ return(GENERAL_MALLOC((word)lb, PTRFREE));
+ }
+}
+
+/* Allocate lb bytes of composite (pointerful) data */
+# ifdef __STDC__
+ GC_PTR GC_malloc(size_t lb)
+# else
+ GC_PTR GC_malloc(lb)
+ size_t lb;
+# endif
+{
+register ptr_t op;
+register ptr_t *opp;
+register word lw;
+DCL_LOCK_STATE;
+
+ if( EXPECT(SMALL_OBJ(lb), 1) ) {
+# ifdef MERGE_SIZES
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+# endif
+ opp = &(GC_objfreelist[lw]);
+ FASTLOCK();
+ if( EXPECT(!FASTLOCK_SUCCEEDED() || (op = *opp) == 0, 0) ) {
+ FASTUNLOCK();
+ return(GENERAL_MALLOC((word)lb, NORMAL));
+ }
+ /* See above comment on signals. */
+ GC_ASSERT(0 == obj_link(op)
+ || (word)obj_link(op)
+ <= (word)GC_greatest_plausible_heap_addr
+ && (word)obj_link(op)
+ >= (word)GC_least_plausible_heap_addr);
+ *opp = obj_link(op);
+ obj_link(op) = 0;
+ GC_words_allocd += lw;
+ FASTUNLOCK();
+ return((GC_PTR) op);
+ } else {
+ return(GENERAL_MALLOC((word)lb, NORMAL));
+ }
+}
+
+# ifdef REDIRECT_MALLOC
+
+/* Avoid unnecessary nested procedure calls here, by #defining some */
+/* malloc replacements. Otherwise we end up saving a */
+/* meaningless return address in the object. It also speeds things up, */
+/* but it is admittedly quite ugly. */
+# ifdef GC_ADD_CALLER
+# define RA GC_RETURN_ADDR,
+# else
+# define RA
+# endif
+# define GC_debug_malloc_replacement(lb) \
+ GC_debug_malloc(lb, RA "unknown", 0)
+
+# ifdef __STDC__
+ GC_PTR malloc(size_t lb)
+# else
+ GC_PTR malloc(lb)
+ size_t lb;
+# endif
+ {
+ /* It might help to manually inline the GC_malloc call here. */
+ /* But any decent compiler should reduce the extra procedure call */
+ /* to at most a jump instruction in this case. */
+# if defined(I386) && defined(GC_SOLARIS_THREADS)
+ /*
+ * Thread initialisation can call malloc before
+ * we're ready for it.
+ * It's not clear that this is enough to help matters.
+ * The thread implementation may well call malloc at other
+ * inopportune times.
+ */
+ if (!GC_is_initialized) return sbrk(lb);
+# endif /* I386 && GC_SOLARIS_THREADS */
+ return((GC_PTR)REDIRECT_MALLOC(lb));
+ }
+
+# ifdef __STDC__
+ GC_PTR calloc(size_t n, size_t lb)
+# else
+ GC_PTR calloc(n, lb)
+ size_t n, lb;
+# endif
+ {
+ return((GC_PTR)REDIRECT_MALLOC(n*lb));
+ }
+
+#ifndef strdup
+# include <string.h>
+# ifdef __STDC__
+ char *strdup(const char *s)
+# else
+ char *strdup(s)
+ char *s;
+# endif
+ {
+    size_t len = strlen(s) + 1;	/* length including the trailing NUL */
+    char * result = ((char *)REDIRECT_MALLOC(len));
+    BCOPY(s, result, len);
+    return result;
+ }
+#endif /* !defined(strdup) */
+ /* If strdup is macro defined, we assume that it actually calls malloc, */
+ /* and thus the right thing will happen even without overriding it. */
+ /* This seems to be true on most Linux systems. */
+
+#undef GC_debug_malloc_replacement
+
+# endif /* REDIRECT_MALLOC */
+
+/* Explicitly deallocate an object p. */
+# ifdef __STDC__
+ void GC_free(GC_PTR p)
+# else
+ void GC_free(p)
+ GC_PTR p;
+# endif
+{
+ register struct hblk *h;
+ register hdr *hhdr;
+ register signed_word sz;
+ register ptr_t * flh;
+ register int knd;
+ register struct obj_kind * ok;
+ DCL_LOCK_STATE;
+
+ if (p == 0) return;
+ /* Required by ANSI. It's not my fault ... */
+ h = HBLKPTR(p);
+ hhdr = HDR(h);
+ GC_ASSERT(GC_base(p) == p);
+# if defined(REDIRECT_MALLOC) && \
+ (defined(GC_SOLARIS_THREADS) || defined(GC_LINUX_THREADS) \
+ || defined(__MINGW32__)) /* Should this be MSWIN32 in general? */
+ /* For Solaris, we have to redirect malloc calls during */
+ /* initialization. For the others, this seems to happen */
+ /* implicitly. */
+ /* Don't try to deallocate that memory. */
+ if (0 == hhdr) return;
+# endif
+ knd = hhdr -> hb_obj_kind;
+ sz = hhdr -> hb_sz;
+ ok = &GC_obj_kinds[knd];
+ if (EXPECT((sz <= MAXOBJSZ), 1)) {
+# ifdef THREADS
+ DISABLE_SIGNALS();
+ LOCK();
+# endif
+ GC_mem_freed += sz;
+ /* A signal here can make GC_mem_freed and GC_non_gc_bytes */
+ /* inconsistent. We claim this is benign. */
+ if (IS_UNCOLLECTABLE(knd)) GC_non_gc_bytes -= WORDS_TO_BYTES(sz);
+	/* It's unnecessary to clear the mark bit.  If the	*/
+	/* object is reallocated, it doesn't matter.  Otherwise	*/
+	/* the collector will do it, since it's on a free list.	*/
+ if (ok -> ok_init) {
+ BZERO((word *)p + 1, WORDS_TO_BYTES(sz-1));
+ }
+ flh = &(ok -> ok_freelist[sz]);
+ obj_link(p) = *flh;
+ *flh = (ptr_t)p;
+# ifdef THREADS
+ UNLOCK();
+ ENABLE_SIGNALS();
+# endif
+ } else {
+ DISABLE_SIGNALS();
+ LOCK();
+ GC_mem_freed += sz;
+ if (IS_UNCOLLECTABLE(knd)) GC_non_gc_bytes -= WORDS_TO_BYTES(sz);
+ GC_freehblk(h);
+ UNLOCK();
+ ENABLE_SIGNALS();
+ }
+}
+
+/* Explicitly deallocate an object p when we already hold lock. */
+/* Only used for internally allocated objects, so we can take some */
+/* shortcuts. */
+#ifdef THREADS
+void GC_free_inner(GC_PTR p)
+{
+ register struct hblk *h;
+ register hdr *hhdr;
+ register signed_word sz;
+ register ptr_t * flh;
+ register int knd;
+ register struct obj_kind * ok;
+ DCL_LOCK_STATE;
+
+ h = HBLKPTR(p);
+ hhdr = HDR(h);
+ knd = hhdr -> hb_obj_kind;
+ sz = hhdr -> hb_sz;
+ ok = &GC_obj_kinds[knd];
+ if (sz <= MAXOBJSZ) {
+ GC_mem_freed += sz;
+ if (IS_UNCOLLECTABLE(knd)) GC_non_gc_bytes -= WORDS_TO_BYTES(sz);
+ if (ok -> ok_init) {
+ BZERO((word *)p + 1, WORDS_TO_BYTES(sz-1));
+ }
+ flh = &(ok -> ok_freelist[sz]);
+ obj_link(p) = *flh;
+ *flh = (ptr_t)p;
+ } else {
+ GC_mem_freed += sz;
+ if (IS_UNCOLLECTABLE(knd)) GC_non_gc_bytes -= WORDS_TO_BYTES(sz);
+ GC_freehblk(h);
+ }
+}
+#endif /* THREADS */
+
+# if defined(REDIRECT_MALLOC) && !defined(REDIRECT_FREE)
+# define REDIRECT_FREE GC_free
+# endif
+# ifdef REDIRECT_FREE
+# ifdef __STDC__
+ void free(GC_PTR p)
+# else
+ void free(p)
+ GC_PTR p;
+# endif
+ {
+# ifndef IGNORE_FREE
+ REDIRECT_FREE(p);
+# endif
+ }
+# endif /* REDIRECT_FREE */
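
(For orientation, a minimal client-side sketch of the entry points malloc.c
defines; illustrative only, not part of the committed file. It assumes the
usual gc.h declarations, and "make_pair" is a made-up example function.)

    #include "gc.h"

    static int **make_pair(void)
    {
        /* Pointer-containing object: cleared on allocation and */
        /* scanned by the collector.                             */
        int **pair = (int **)GC_malloc(2 * sizeof(int *));
        /* Pointer-free object: its contents are never scanned.  */
        int *nums = (int *)GC_malloc_atomic(64 * sizeof(int));

        pair[0] = nums;
        pair[1] = 0;
        /* The pair is reclaimed automatically when unreachable;  */
        /* GC_free(pair) would return it to its free list sooner. */
        return pair;
    }
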
Added: llvm-gcc-4.2/trunk/boehm-gc/mallocx.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/mallocx.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/mallocx.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/mallocx.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,695 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 2000 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/*
+ * These are extra allocation routines which are likely to be less
+ * frequently used than those in malloc.c. They are separate in the
+ * hope that the .o file will be excluded from statically linked
+ * executables. We should probably break this up further.
+ */
+
+#include <stdio.h>
+#include "private/gc_priv.h"
+
+extern ptr_t GC_clear_stack(); /* in misc.c, behaves like identity */
+void GC_extend_size_map(); /* in misc.c. */
+GC_bool GC_alloc_reclaim_list(); /* in malloc.c */
+
+/* Some externally visible but unadvertised variables to allow access to */
+/* free lists from inlined allocators without including gc_priv.h */
+/* or introducing dependencies on internal data structure layouts. */
+ptr_t * GC_CONST GC_objfreelist_ptr = GC_objfreelist;
+ptr_t * GC_CONST GC_aobjfreelist_ptr = GC_aobjfreelist;
+ptr_t * GC_CONST GC_uobjfreelist_ptr = GC_uobjfreelist;
+# ifdef ATOMIC_UNCOLLECTABLE
+ ptr_t * GC_CONST GC_auobjfreelist_ptr = GC_auobjfreelist;
+# endif
+
+
+GC_PTR GC_generic_or_special_malloc(lb,knd)
+word lb;
+int knd;
+{
+ switch(knd) {
+# ifdef STUBBORN_ALLOC
+ case STUBBORN:
+ return(GC_malloc_stubborn((size_t)lb));
+# endif
+ case PTRFREE:
+ return(GC_malloc_atomic((size_t)lb));
+ case NORMAL:
+ return(GC_malloc((size_t)lb));
+ case UNCOLLECTABLE:
+ return(GC_malloc_uncollectable((size_t)lb));
+# ifdef ATOMIC_UNCOLLECTABLE
+ case AUNCOLLECTABLE:
+ return(GC_malloc_atomic_uncollectable((size_t)lb));
+# endif /* ATOMIC_UNCOLLECTABLE */
+ default:
+ return(GC_generic_malloc(lb,knd));
+ }
+}
+
+
+/* Change the size of the block pointed to by p to contain at least */
+/* lb bytes. The object may be (and quite likely will be) moved. */
+/* The kind (e.g. atomic) is the same as that of the old. */
+/* Shrinking of large blocks is not implemented well. */
+# ifdef __STDC__
+ GC_PTR GC_realloc(GC_PTR p, size_t lb)
+# else
+ GC_PTR GC_realloc(p,lb)
+ GC_PTR p;
+ size_t lb;
+# endif
+{
+register struct hblk * h;
+register hdr * hhdr;
+register word sz; /* Current size in bytes */
+register word orig_sz; /* Original sz in bytes */
+int obj_kind;
+
+ if (p == 0) return(GC_malloc(lb)); /* Required by ANSI */
+ h = HBLKPTR(p);
+ hhdr = HDR(h);
+ sz = hhdr -> hb_sz;
+ obj_kind = hhdr -> hb_obj_kind;
+ sz = WORDS_TO_BYTES(sz);
+ orig_sz = sz;
+
+ if (sz > MAXOBJBYTES) {
+ /* Round it up to the next whole heap block */
+ register word descr;
+
+ sz = (sz+HBLKSIZE-1) & (~HBLKMASK);
+ hhdr -> hb_sz = BYTES_TO_WORDS(sz);
+ descr = GC_obj_kinds[obj_kind].ok_descriptor;
+ if (GC_obj_kinds[obj_kind].ok_relocate_descr) descr += sz;
+ hhdr -> hb_descr = descr;
+ if (IS_UNCOLLECTABLE(obj_kind)) GC_non_gc_bytes += (sz - orig_sz);
+ /* Extra area is already cleared by GC_alloc_large_and_clear. */
+ }
+ if (ADD_SLOP(lb) <= sz) {
+ if (lb >= (sz >> 1)) {
+# ifdef STUBBORN_ALLOC
+ if (obj_kind == STUBBORN) GC_change_stubborn(p);
+# endif
+ if (orig_sz > lb) {
+ /* Clear unneeded part of object to avoid bogus pointer */
+ /* tracing. */
+ /* Safe for stubborn objects. */
+ BZERO(((ptr_t)p) + lb, orig_sz - lb);
+ }
+ return(p);
+ } else {
+ /* shrink */
+ GC_PTR result =
+ GC_generic_or_special_malloc((word)lb, obj_kind);
+
+ if (result == 0) return(0);
+ /* Could also return original object. But this */
+ /* gives the client warning of imminent disaster. */
+ BCOPY(p, result, lb);
+# ifndef IGNORE_FREE
+ GC_free(p);
+# endif
+ return(result);
+ }
+ } else {
+ /* grow */
+ GC_PTR result =
+ GC_generic_or_special_malloc((word)lb, obj_kind);
+
+ if (result == 0) return(0);
+ BCOPY(p, result, sz);
+# ifndef IGNORE_FREE
+ GC_free(p);
+# endif
+ return(result);
+ }
+}
+
+# if defined(REDIRECT_MALLOC) && !defined(REDIRECT_REALLOC)
+# define REDIRECT_REALLOC GC_realloc
+# endif
+
+# ifdef REDIRECT_REALLOC
+
+/* As with malloc, avoid two levels of extra calls here. */
+# ifdef GC_ADD_CALLER
+# define RA GC_RETURN_ADDR,
+# else
+# define RA
+# endif
+# define GC_debug_realloc_replacement(p, lb) \
+ GC_debug_realloc(p, lb, RA "unknown", 0)
+
+# ifdef __STDC__
+ GC_PTR realloc(GC_PTR p, size_t lb)
+# else
+ GC_PTR realloc(p,lb)
+ GC_PTR p;
+ size_t lb;
+# endif
+ {
+ return(REDIRECT_REALLOC(p, lb));
+ }
+
+# undef GC_debug_realloc_replacement
+# endif /* REDIRECT_REALLOC */
+
+
+/* Allocate memory such that only pointers to near the */
+/* beginning of the object are considered. */
+/* We avoid holding allocation lock while we clear memory. */
+ptr_t GC_generic_malloc_ignore_off_page(lb, k)
+register size_t lb;
+register int k;
+{
+ register ptr_t result;
+ word lw;
+ word n_blocks;
+ GC_bool init;
+ DCL_LOCK_STATE;
+
+ if (SMALL_OBJ(lb))
+ return(GC_generic_malloc((word)lb, k));
+ lw = ROUNDED_UP_WORDS(lb);
+ n_blocks = OBJ_SZ_TO_BLOCKS(lw);
+ init = GC_obj_kinds[k].ok_init;
+ if (GC_have_errors) GC_print_all_errors();
+ GC_INVOKE_FINALIZERS();
+ DISABLE_SIGNALS();
+ LOCK();
+ result = (ptr_t)GC_alloc_large(lw, k, IGNORE_OFF_PAGE);
+ if (0 != result) {
+ if (GC_debugging_started) {
+ BZERO(result, n_blocks * HBLKSIZE);
+ } else {
+# ifdef THREADS
+ /* Clear any memory that might be used for GC descriptors */
+ /* before we release the lock. */
+ ((word *)result)[0] = 0;
+ ((word *)result)[1] = 0;
+ ((word *)result)[lw-1] = 0;
+ ((word *)result)[lw-2] = 0;
+# endif
+ }
+ }
+ GC_words_allocd += lw;
+ UNLOCK();
+ ENABLE_SIGNALS();
+ if (0 == result) {
+ return((*GC_oom_fn)(lb));
+ } else {
+ if (init && !GC_debugging_started) {
+ BZERO(result, n_blocks * HBLKSIZE);
+ }
+ return(result);
+ }
+}
+
+# if defined(__STDC__) || defined(__cplusplus)
+ void * GC_malloc_ignore_off_page(size_t lb)
+# else
+ char * GC_malloc_ignore_off_page(lb)
+ register size_t lb;
+# endif
+{
+ return((GC_PTR)GC_generic_malloc_ignore_off_page(lb, NORMAL));
+}
+
+# if defined(__STDC__) || defined(__cplusplus)
+ void * GC_malloc_atomic_ignore_off_page(size_t lb)
+# else
+ char * GC_malloc_atomic_ignore_off_page(lb)
+ register size_t lb;
+# endif
+{
+ return((GC_PTR)GC_generic_malloc_ignore_off_page(lb, PTRFREE));
+}
+
+/* Increment GC_words_allocd from code that doesn't have direct access */
+/* to GC_arrays. */
+# ifdef __STDC__
+void GC_incr_words_allocd(size_t n)
+{
+ GC_words_allocd += n;
+}
+
+/* The same for GC_mem_freed. */
+void GC_incr_mem_freed(size_t n)
+{
+ GC_mem_freed += n;
+}
+# endif /* __STDC__ */
+
+/* Analogous to the above, but assumes a small object size, and */
+/* bypasses MERGE_SIZES mechanism. Used by gc_inline.h. */
+ptr_t GC_generic_malloc_words_small_inner(lw, k)
+register word lw;
+register int k;
+{
+register ptr_t op;
+register ptr_t *opp;
+register struct obj_kind * kind = GC_obj_kinds + k;
+
+ opp = &(kind -> ok_freelist[lw]);
+ if( (op = *opp) == 0 ) {
+ if (!GC_is_initialized) {
+ GC_init_inner();
+ }
+ if (kind -> ok_reclaim_list != 0 || GC_alloc_reclaim_list(kind)) {
+ op = GC_clear_stack(GC_allocobj((word)lw, k));
+ }
+ if (op == 0) {
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return ((*GC_oom_fn)(WORDS_TO_BYTES(lw)));
+ }
+ }
+ *opp = obj_link(op);
+ obj_link(op) = 0;
+ GC_words_allocd += lw;
+ return((ptr_t)op);
+}
+
+/* Analogous to the above, but assumes a small object size, and */
+/* bypasses MERGE_SIZES mechanism. Used by gc_inline.h. */
+#ifdef __STDC__
+ ptr_t GC_generic_malloc_words_small(size_t lw, int k)
+#else
+ ptr_t GC_generic_malloc_words_small(lw, k)
+ register word lw;
+ register int k;
+#endif
+{
+register ptr_t op;
+DCL_LOCK_STATE;
+
+ if (GC_have_errors) GC_print_all_errors();
+ GC_INVOKE_FINALIZERS();
+ DISABLE_SIGNALS();
+ LOCK();
+ op = GC_generic_malloc_words_small_inner(lw, k);
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return((ptr_t)op);
+}
+
+#if defined(THREADS) && !defined(SRC_M3)
+
+extern signed_word GC_mem_found; /* Protected by GC lock. */
+
+#ifdef PARALLEL_MARK
+volatile signed_word GC_words_allocd_tmp = 0;
+ /* Number of words of memory allocated since */
+ /* we released the GC lock. Instead of */
+ /* reacquiring the GC lock just to add this in, */
+ /* we add it in the next time we reacquire */
+ /* the lock. (Atomically adding it doesn't */
+ /* work, since we would have to atomically */
+ /* update it in GC_malloc, which is too */
+	/* expensive.)					*/
+#endif /* PARALLEL_MARK */
+
+/* See reclaim.c: */
+extern ptr_t GC_reclaim_generic();
+
+/* Return a list of 1 or more objects of the indicated size, linked */
+/* through the first word in the object. This has the advantage that */
+/* it acquires the allocation lock only once, and may greatly reduce */
+/* time wasted contending for the allocation lock. Typical usage would */
+/* be in a thread that requires many items of the same size. It would */
+/* keep its own free list in thread-local storage, and call */
+/* GC_malloc_many or friends to replenish it. (We do not round up */
+/* object sizes, since a call indicates the intention to consume many */
+/* objects of exactly this size.) */
+/* We return the free-list by assigning it to *result, since it is */
+/* not safe to return, e.g. a linked list of pointer-free objects, */
+/* since the collector would not retain the entire list if it were */
+/* invoked just as we were returning. */
+/* Note that the client should usually clear the link field. */
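+/* An illustrative sketch of that pattern (not part of the interface;	*/
+/* "my_free_list" and "obj_bytes" are hypothetical client names):	*/
+/*   if (0 == my_free_list)						*/
+/*       GC_generic_malloc_many(obj_bytes, NORMAL, &my_free_list);	*/
+/*   op = my_free_list;           (a robust client checks for 0 here)	*/
+/*   my_free_list = obj_link(op);					*/
+/*   obj_link(op) = 0;            (clear the link field)		*/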
+void GC_generic_malloc_many(lb, k, result)
+register word lb;
+register int k;
+ptr_t *result;
+{
+ptr_t op;
+ptr_t p;
+ptr_t *opp;
+word lw;
+word my_words_allocd = 0;
+struct obj_kind * ok = &(GC_obj_kinds[k]);
+DCL_LOCK_STATE;
+
+# if defined(GATHERSTATS) || defined(PARALLEL_MARK)
+# define COUNT_ARG , &my_words_allocd
+# else
+# define COUNT_ARG
+# define NEED_TO_COUNT
+# endif
+ if (!SMALL_OBJ(lb)) {
+ op = GC_generic_malloc(lb, k);
+ if(0 != op) obj_link(op) = 0;
+ *result = op;
+ return;
+ }
+ lw = ALIGNED_WORDS(lb);
+ if (GC_have_errors) GC_print_all_errors();
+ GC_INVOKE_FINALIZERS();
+ DISABLE_SIGNALS();
+ LOCK();
+ if (!GC_is_initialized) GC_init_inner();
+ /* Do our share of marking work */
+ if (GC_incremental && !GC_dont_gc) {
+ ENTER_GC();
+ GC_collect_a_little_inner(1);
+ EXIT_GC();
+ }
+ /* First see if we can reclaim a page of objects waiting to be */
+ /* reclaimed. */
+ {
+ struct hblk ** rlh = ok -> ok_reclaim_list;
+ struct hblk * hbp;
+ hdr * hhdr;
+
+ rlh += lw;
+ while ((hbp = *rlh) != 0) {
+ hhdr = HDR(hbp);
+ *rlh = hhdr -> hb_next;
+ hhdr -> hb_last_reclaimed = (unsigned short) GC_gc_no;
+# ifdef PARALLEL_MARK
+ {
+ signed_word my_words_allocd_tmp = GC_words_allocd_tmp;
+
+ GC_ASSERT(my_words_allocd_tmp >= 0);
+ /* We only decrement it while holding the GC lock. */
+ /* Thus we can't accidentally adjust it down in more */
+ /* than one thread simultaneously. */
+ if (my_words_allocd_tmp != 0) {
+ (void)GC_atomic_add(
+ (volatile GC_word *)(&GC_words_allocd_tmp),
+ (GC_word)(-my_words_allocd_tmp));
+ GC_words_allocd += my_words_allocd_tmp;
+ }
+ }
+ GC_acquire_mark_lock();
+ ++ GC_fl_builder_count;
+ UNLOCK();
+ ENABLE_SIGNALS();
+ GC_release_mark_lock();
+# endif
+ op = GC_reclaim_generic(hbp, hhdr, lw,
+ ok -> ok_init, 0 COUNT_ARG);
+ if (op != 0) {
+# ifdef NEED_TO_COUNT
+ /* We are neither gathering statistics, nor marking in */
+ /* parallel. Thus GC_reclaim_generic doesn't count */
+ /* for us. */
+ for (p = op; p != 0; p = obj_link(p)) {
+ my_words_allocd += lw;
+ }
+# endif
+# if defined(GATHERSTATS)
+ /* We also reclaimed memory, so we need to adjust */
+ /* that count. */
+	      /* This update is not atomic, so the results may be	*/
+	      /* slightly inaccurate.					*/
+ GC_mem_found += my_words_allocd;
+# endif
+# ifdef PARALLEL_MARK
+ *result = op;
+ (void)GC_atomic_add(
+ (volatile GC_word *)(&GC_words_allocd_tmp),
+ (GC_word)(my_words_allocd));
+ GC_acquire_mark_lock();
+ -- GC_fl_builder_count;
+ if (GC_fl_builder_count == 0) GC_notify_all_builder();
+ GC_release_mark_lock();
+ (void) GC_clear_stack(0);
+ return;
+# else
+ GC_words_allocd += my_words_allocd;
+ goto out;
+# endif
+ }
+# ifdef PARALLEL_MARK
+ GC_acquire_mark_lock();
+ -- GC_fl_builder_count;
+ if (GC_fl_builder_count == 0) GC_notify_all_builder();
+ GC_release_mark_lock();
+ DISABLE_SIGNALS();
+ LOCK();
+ /* GC lock is needed for reclaim list access. We */
+	    /* must decrement fl_builder_count before reacquiring GC	*/
+ /* lock. Hopefully this path is rare. */
+# endif
+ }
+ }
+ /* Next try to use prefix of global free list if there is one. */
+ /* We don't refill it, but we need to use it up before allocating */
+ /* a new block ourselves. */
+ opp = &(GC_obj_kinds[k].ok_freelist[lw]);
+ if ( (op = *opp) != 0 ) {
+ *opp = 0;
+ my_words_allocd = 0;
+ for (p = op; p != 0; p = obj_link(p)) {
+ my_words_allocd += lw;
+ if (my_words_allocd >= BODY_SZ) {
+ *opp = obj_link(p);
+ obj_link(p) = 0;
+ break;
+ }
+ }
+ GC_words_allocd += my_words_allocd;
+ goto out;
+ }
+ /* Next try to allocate a new block worth of objects of this size. */
+ {
+ struct hblk *h = GC_allochblk(lw, k, 0);
+ if (h != 0) {
+ if (IS_UNCOLLECTABLE(k)) GC_set_hdr_marks(HDR(h));
+ GC_words_allocd += BYTES_TO_WORDS(HBLKSIZE)
+ - BYTES_TO_WORDS(HBLKSIZE) % lw;
+# ifdef PARALLEL_MARK
+ GC_acquire_mark_lock();
+ ++ GC_fl_builder_count;
+ UNLOCK();
+ ENABLE_SIGNALS();
+ GC_release_mark_lock();
+# endif
+
+ op = GC_build_fl(h, lw, ok -> ok_init, 0);
+# ifdef PARALLEL_MARK
+ *result = op;
+ GC_acquire_mark_lock();
+ -- GC_fl_builder_count;
+ if (GC_fl_builder_count == 0) GC_notify_all_builder();
+ GC_release_mark_lock();
+ (void) GC_clear_stack(0);
+ return;
+# else
+ goto out;
+# endif
+ }
+ }
+
+ /* As a last attempt, try allocating a single object. Note that */
+ /* this may trigger a collection or expand the heap. */
+ op = GC_generic_malloc_inner(lb, k);
+ if (0 != op) obj_link(op) = 0;
+
+ out:
+ *result = op;
+ UNLOCK();
+ ENABLE_SIGNALS();
+ (void) GC_clear_stack(0);
+}
+
+GC_PTR GC_malloc_many(size_t lb)
+{
+ ptr_t result;
+ GC_generic_malloc_many(lb, NORMAL, &result);
+ return result;
+}
+
+/* Note that the "atomic" version of this would be unsafe, since the */
+/* links would not be seen by the collector. */
+# endif
+
+/* Allocate lb bytes of pointerful, traced, but not collectable data */
+# ifdef __STDC__
+ GC_PTR GC_malloc_uncollectable(size_t lb)
+# else
+ GC_PTR GC_malloc_uncollectable(lb)
+ size_t lb;
+# endif
+{
+register ptr_t op;
+register ptr_t *opp;
+register word lw;
+DCL_LOCK_STATE;
+
+ if( SMALL_OBJ(lb) ) {
+# ifdef MERGE_SIZES
+ if (EXTRA_BYTES != 0 && lb != 0) lb--;
+ /* We don't need the extra byte, since this won't be */
+ /* collected anyway. */
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+# endif
+ opp = &(GC_uobjfreelist[lw]);
+ FASTLOCK();
+ if( FASTLOCK_SUCCEEDED() && (op = *opp) != 0 ) {
+ /* See above comment on signals. */
+ *opp = obj_link(op);
+ obj_link(op) = 0;
+ GC_words_allocd += lw;
+	    /* Mark bit was already set on free list.  It will be	*/
+ /* cleared only temporarily during a collection, as a */
+ /* result of the normal free list mark bit clearing. */
+ GC_non_gc_bytes += WORDS_TO_BYTES(lw);
+ FASTUNLOCK();
+ return((GC_PTR) op);
+ }
+ FASTUNLOCK();
+ op = (ptr_t)GC_generic_malloc((word)lb, UNCOLLECTABLE);
+ } else {
+ op = (ptr_t)GC_generic_malloc((word)lb, UNCOLLECTABLE);
+ }
+ if (0 == op) return(0);
+ /* We don't need the lock here, since we have an undisguised */
+ /* pointer. We do need to hold the lock while we adjust */
+ /* mark bits. */
+ {
+ register struct hblk * h;
+
+ h = HBLKPTR(op);
+ lw = HDR(h) -> hb_sz;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ GC_set_mark_bit(op);
+ GC_non_gc_bytes += WORDS_TO_BYTES(lw);
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return((GC_PTR) op);
+ }
+}
+
+#ifdef __STDC__
+/* Not well tested or integrated.	*/
+/* Debug version is tricky and currently missing. */
+#include <limits.h>
+
+GC_PTR GC_memalign(size_t align, size_t lb)
+{
+ size_t new_lb;
+ size_t offset;
+ ptr_t result;
+
+# ifdef ALIGN_DOUBLE
+ if (align <= WORDS_TO_BYTES(2) && lb > align) return GC_malloc(lb);
+# endif
+ if (align <= WORDS_TO_BYTES(1)) return GC_malloc(lb);
+ if (align >= HBLKSIZE/2 || lb >= HBLKSIZE/2) {
+ if (align > HBLKSIZE) return GC_oom_fn(LONG_MAX-1024) /* Fail */;
+ return GC_malloc(lb <= HBLKSIZE? HBLKSIZE : lb);
+ /* Will be HBLKSIZE aligned. */
+ }
+ /* We could also try to make sure that the real rounded-up object size */
+ /* is a multiple of align. That would be correct up to HBLKSIZE. */
+ new_lb = lb + align - 1;
+ result = GC_malloc(new_lb);
+ offset = (word)result % align;
+ if (offset != 0) {
+ offset = align - offset;
+ if (!GC_all_interior_pointers) {
+ if (offset >= VALID_OFFSET_SZ) return GC_malloc(HBLKSIZE);
+ GC_register_displacement(offset);
+ }
+ }
+ result = (GC_PTR) ((ptr_t)result + offset);
+ GC_ASSERT((word)result % align == 0);
+ return result;
+}
+#endif
+
+# ifdef ATOMIC_UNCOLLECTABLE
+/* Allocate lb bytes of pointerfree, untraced, uncollectable data */
+/* This is normally roughly equivalent to the system malloc. */
+/* But it may be useful if malloc is redefined. */
+# ifdef __STDC__
+ GC_PTR GC_malloc_atomic_uncollectable(size_t lb)
+# else
+ GC_PTR GC_malloc_atomic_uncollectable(lb)
+ size_t lb;
+# endif
+{
+register ptr_t op;
+register ptr_t *opp;
+register word lw;
+DCL_LOCK_STATE;
+
+ if( SMALL_OBJ(lb) ) {
+# ifdef MERGE_SIZES
+ if (EXTRA_BYTES != 0 && lb != 0) lb--;
+ /* We don't need the extra byte, since this won't be */
+ /* collected anyway. */
+ lw = GC_size_map[lb];
+# else
+ lw = ALIGNED_WORDS(lb);
+# endif
+ opp = &(GC_auobjfreelist[lw]);
+ FASTLOCK();
+ if( FASTLOCK_SUCCEEDED() && (op = *opp) != 0 ) {
+ /* See above comment on signals. */
+ *opp = obj_link(op);
+ obj_link(op) = 0;
+ GC_words_allocd += lw;
+ /* Mark bit was already set while object was on free list. */
+ GC_non_gc_bytes += WORDS_TO_BYTES(lw);
+ FASTUNLOCK();
+ return((GC_PTR) op);
+ }
+ FASTUNLOCK();
+ op = (ptr_t)GC_generic_malloc((word)lb, AUNCOLLECTABLE);
+ } else {
+ op = (ptr_t)GC_generic_malloc((word)lb, AUNCOLLECTABLE);
+ }
+ if (0 == op) return(0);
+ /* We don't need the lock here, since we have an undisguised */
+ /* pointer. We do need to hold the lock while we adjust */
+ /* mark bits. */
+ {
+ register struct hblk * h;
+
+ h = HBLKPTR(op);
+ lw = HDR(h) -> hb_sz;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ GC_set_mark_bit(op);
+ GC_non_gc_bytes += WORDS_TO_BYTES(lw);
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return((GC_PTR) op);
+ }
+}
+
+#endif /* ATOMIC_UNCOLLECTABLE */
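
(Again for orientation, a brief usage sketch of the additional entry points
above; illustrative only, with arbitrary sizes and alignment. Note that
GC_memalign is compiled only for __STDC__ builds.)

    #include "gc.h"

    static void example(void)
    {
        char *buf = (char *)GC_malloc(100);
        void *aligned;
        void *root;

        /* GC_realloc may move the object; its kind is preserved. */
        buf = (char *)GC_realloc(buf, 200);

        /* 1 KB at a 64-byte boundary; trivial alignments fall    */
        /* back to plain GC_malloc.                                */
        aligned = GC_memalign(64, 1024);

        /* Traced but never collected: must be freed explicitly.  */
        root = GC_malloc_uncollectable(sizeof buf);
        GC_free(root);
    }
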
Added: llvm-gcc-4.2/trunk/boehm-gc/mark.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/mark.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/mark.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/mark.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,1817 @@
+
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 2000 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ */
+
+
+# include <stdio.h>
+# include "private/gc_pmark.h"
+
+#if defined(MSWIN32) && defined(__GNUC__)
+# include <excpt.h>
+#endif
+
+/* We put this here to minimize the risk of inlining. */
+/*VARARGS*/
+#ifdef __WATCOMC__
+ void GC_noop(void *p, ...) {}
+#else
+ void GC_noop() {}
+#endif
+
+/* Single argument version, robust against whole program analysis. */
+void GC_noop1(x)
+word x;
+{
+ static VOLATILE word sink;
+
+ sink = x;
+}
+
+/* mark_proc GC_mark_procs[MAX_MARK_PROCS] = {0} -- declared in gc_priv.h */
+
+word GC_n_mark_procs = GC_RESERVED_MARK_PROCS;
+
+/* Initialize GC_obj_kinds properly and standard free lists properly. */
+/* This must be done statically since they may be accessed before */
+/* GC_init is called. */
+/* It's done here, since we need to deal with mark descriptors. */
+struct obj_kind GC_obj_kinds[MAXOBJKINDS] = {
+/* PTRFREE */ { &GC_aobjfreelist[0], 0 /* filled in dynamically */,
+ 0 | GC_DS_LENGTH, FALSE, FALSE },
+/* NORMAL */ { &GC_objfreelist[0], 0,
+ 0 | GC_DS_LENGTH, /* Adjusted in GC_init_inner for EXTRA_BYTES */
+ TRUE /* add length to descr */, TRUE },
+/* UNCOLLECTABLE */
+ { &GC_uobjfreelist[0], 0,
+ 0 | GC_DS_LENGTH, TRUE /* add length to descr */, TRUE },
+# ifdef ATOMIC_UNCOLLECTABLE
+ /* AUNCOLLECTABLE */
+ { &GC_auobjfreelist[0], 0,
+ 0 | GC_DS_LENGTH, FALSE /* add length to descr */, FALSE },
+# endif
+# ifdef STUBBORN_ALLOC
+/*STUBBORN*/ { &GC_sobjfreelist[0], 0,
+ 0 | GC_DS_LENGTH, TRUE /* add length to descr */, TRUE },
+# endif
+};
+
+# ifdef ATOMIC_UNCOLLECTABLE
+# ifdef STUBBORN_ALLOC
+ int GC_n_kinds = 5;
+# else
+ int GC_n_kinds = 4;
+# endif
+# else
+# ifdef STUBBORN_ALLOC
+ int GC_n_kinds = 4;
+# else
+ int GC_n_kinds = 3;
+# endif
+# endif
+
+
+# ifndef INITIAL_MARK_STACK_SIZE
+# define INITIAL_MARK_STACK_SIZE (1*HBLKSIZE)
+ /* INITIAL_MARK_STACK_SIZE * sizeof(mse) should be a */
+ /* multiple of HBLKSIZE. */
+ /* The incremental collector actually likes a larger */
+	/* size, since it wants to push all marked dirty objs	*/
+ /* before marking anything new. Currently we let it */
+ /* grow dynamically. */
+# endif
+
+/*
+ * Limits of stack for GC_mark routine.
+ * All ranges between GC_mark_stack (incl.) and GC_mark_stack_top (incl.) still
+ * need to be marked from.
+ */
+
+word GC_n_rescuing_pages; /* Number of dirty pages we marked from */
+ /* excludes ptrfree pages, etc. */
+
+mse * GC_mark_stack;
+
+mse * GC_mark_stack_limit;
+
+word GC_mark_stack_size = 0;
+
+#ifdef PARALLEL_MARK
+ mse * VOLATILE GC_mark_stack_top;
+#else
+ mse * GC_mark_stack_top;
+#endif
+
+static struct hblk * scan_ptr;
+
+mark_state_t GC_mark_state = MS_NONE;
+
+GC_bool GC_mark_stack_too_small = FALSE;
+
+GC_bool GC_objects_are_marked = FALSE; /* Are there collectable marked */
+ /* objects in the heap? */
+
+/* Is a collection in progress? Note that this can return true in the */
+/* nonincremental case, if a collection has been abandoned and the */
+/* mark state is now MS_INVALID. */
+GC_bool GC_collection_in_progress()
+{
+ return(GC_mark_state != MS_NONE);
+}
+
+/* clear all mark bits in the header */
+void GC_clear_hdr_marks(hhdr)
+register hdr * hhdr;
+{
+# ifdef USE_MARK_BYTES
+ BZERO(hhdr -> hb_marks, MARK_BITS_SZ);
+# else
+ BZERO(hhdr -> hb_marks, MARK_BITS_SZ*sizeof(word));
+# endif
+}
+
+/* Set all mark bits in the header. Used for uncollectable blocks. */
+void GC_set_hdr_marks(hhdr)
+register hdr * hhdr;
+{
+ register int i;
+
+ for (i = 0; i < MARK_BITS_SZ; ++i) {
+# ifdef USE_MARK_BYTES
+ hhdr -> hb_marks[i] = 1;
+# else
+ hhdr -> hb_marks[i] = ONES;
+# endif
+ }
+}
+
+/*
+ * Clear all mark bits associated with block h.
+ */
+/*ARGSUSED*/
+# if defined(__STDC__) || defined(__cplusplus)
+ static void clear_marks_for_block(struct hblk *h, word dummy)
+# else
+ static void clear_marks_for_block(h, dummy)
+ struct hblk *h;
+ word dummy;
+# endif
+{
+ register hdr * hhdr = HDR(h);
+
+ if (IS_UNCOLLECTABLE(hhdr -> hb_obj_kind)) return;
+ /* Mark bit for these is cleared only once the object is */
+ /* explicitly deallocated. This either frees the block, or */
+ /* the bit is cleared once the object is on the free list. */
+ GC_clear_hdr_marks(hhdr);
+}
+
+/* Slow but general routines for setting/clearing/asking about mark bits */
+void GC_set_mark_bit(p)
+ptr_t p;
+{
+ register struct hblk *h = HBLKPTR(p);
+ register hdr * hhdr = HDR(h);
+ register int word_no = (word *)p - (word *)h;
+
+ set_mark_bit_from_hdr(hhdr, word_no);
+}
+
+void GC_clear_mark_bit(p)
+ptr_t p;
+{
+ register struct hblk *h = HBLKPTR(p);
+ register hdr * hhdr = HDR(h);
+ register int word_no = (word *)p - (word *)h;
+
+ clear_mark_bit_from_hdr(hhdr, word_no);
+}
+
+GC_bool GC_is_marked(p)
+ptr_t p;
+{
+ register struct hblk *h = HBLKPTR(p);
+ register hdr * hhdr = HDR(h);
+ register int word_no = (word *)p - (word *)h;
+
+ return(mark_bit_from_hdr(hhdr, word_no));
+}
+
+
+/*
+ * Clear mark bits in all allocated heap blocks. This invalidates
+ * the marker invariant, and sets GC_mark_state to reflect this.
+ * (This implicitly starts marking to reestablish the invariant.)
+ */
+void GC_clear_marks()
+{
+ GC_apply_to_all_blocks(clear_marks_for_block, (word)0);
+ GC_objects_are_marked = FALSE;
+ GC_mark_state = MS_INVALID;
+ scan_ptr = 0;
+# ifdef GATHERSTATS
+ /* Counters reflect currently marked objects: reset here */
+ GC_composite_in_use = 0;
+ GC_atomic_in_use = 0;
+# endif
+
+}
+
+/* Initiate a garbage collection. Initiates a full collection if the */
+/* mark state is invalid. */
+/*ARGSUSED*/
+void GC_initiate_gc()
+{
+ if (GC_dirty_maintained) GC_read_dirty();
+# ifdef STUBBORN_ALLOC
+ GC_read_changed();
+# endif
+# ifdef CHECKSUMS
+ {
+ extern void GC_check_dirty();
+
+ if (GC_dirty_maintained) GC_check_dirty();
+ }
+# endif
+ GC_n_rescuing_pages = 0;
+ if (GC_mark_state == MS_NONE) {
+ GC_mark_state = MS_PUSH_RESCUERS;
+ } else if (GC_mark_state != MS_INVALID) {
+ ABORT("unexpected state");
+ } /* else this is really a full collection, and mark */
+ /* bits are invalid. */
+ scan_ptr = 0;
+}
+
+
+static void alloc_mark_stack();
+
+/* Perform a small amount of marking. */
+/* We try to touch roughly a page of memory. */
+/* Return TRUE if we just finished a mark phase. */
+/* Cold_gc_frame is an address inside a GC frame that */
+/* remains valid until all marking is complete. */
+/* A zero value indicates that it's OK to miss some */
+/* register values. */
+/* We hold the allocation lock. In the case of */
+/* incremental collection, the world may not be stopped.*/
+#ifdef MSWIN32
+ /* For win32, this is called after we establish a structured */
+ /* exception handler, in case Windows unmaps one of our root */
+ /* segments. See below. In either case, we acquire the */
+ /* allocator lock long before we get here. */
+ GC_bool GC_mark_some_inner(cold_gc_frame)
+ ptr_t cold_gc_frame;
+#else
+ GC_bool GC_mark_some(cold_gc_frame)
+ ptr_t cold_gc_frame;
+#endif
+{
+ switch(GC_mark_state) {
+ case MS_NONE:
+ return(FALSE);
+
+ case MS_PUSH_RESCUERS:
+ if (GC_mark_stack_top
+ >= GC_mark_stack_limit - INITIAL_MARK_STACK_SIZE/2) {
+ /* Go ahead and mark, even though that might cause us to */
+ /* see more marked dirty objects later on. Avoid this */
+ /* in the future. */
+ GC_mark_stack_too_small = TRUE;
+ MARK_FROM_MARK_STACK();
+ return(FALSE);
+ } else {
+ scan_ptr = GC_push_next_marked_dirty(scan_ptr);
+ if (scan_ptr == 0) {
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Marked from %lu dirty pages\n",
+ (unsigned long)GC_n_rescuing_pages);
+ }
+# endif
+ GC_push_roots(FALSE, cold_gc_frame);
+ GC_objects_are_marked = TRUE;
+ if (GC_mark_state != MS_INVALID) {
+ GC_mark_state = MS_ROOTS_PUSHED;
+ }
+ }
+ }
+ return(FALSE);
+
+ case MS_PUSH_UNCOLLECTABLE:
+ if (GC_mark_stack_top
+ >= GC_mark_stack + GC_mark_stack_size/4) {
+# ifdef PARALLEL_MARK
+ /* Avoid this, since we don't parallelize the marker */
+ /* here. */
+ if (GC_parallel) GC_mark_stack_too_small = TRUE;
+# endif
+ MARK_FROM_MARK_STACK();
+ return(FALSE);
+ } else {
+ scan_ptr = GC_push_next_marked_uncollectable(scan_ptr);
+ if (scan_ptr == 0) {
+ GC_push_roots(TRUE, cold_gc_frame);
+ GC_objects_are_marked = TRUE;
+ if (GC_mark_state != MS_INVALID) {
+ GC_mark_state = MS_ROOTS_PUSHED;
+ }
+ }
+ }
+ return(FALSE);
+
+ case MS_ROOTS_PUSHED:
+# ifdef PARALLEL_MARK
+ /* In the incremental GC case, this currently doesn't */
+ /* quite do the right thing, since it runs to */
+ /* completion. On the other hand, starting a */
+ /* parallel marker is expensive, so perhaps it is */
+ /* the right thing? */
+ /* Eventually, incremental marking should run */
+ /* asynchronously in multiple threads, without grabbing */
+ /* the allocation lock. */
+ if (GC_parallel) {
+ GC_do_parallel_mark();
+ GC_ASSERT(GC_mark_stack_top < GC_first_nonempty);
+ GC_mark_stack_top = GC_mark_stack - 1;
+ if (GC_mark_stack_too_small) {
+ alloc_mark_stack(2*GC_mark_stack_size);
+ }
+ if (GC_mark_state == MS_ROOTS_PUSHED) {
+ GC_mark_state = MS_NONE;
+ return(TRUE);
+ } else {
+ return(FALSE);
+ }
+ }
+# endif
+ if (GC_mark_stack_top >= GC_mark_stack) {
+ MARK_FROM_MARK_STACK();
+ return(FALSE);
+ } else {
+ GC_mark_state = MS_NONE;
+ if (GC_mark_stack_too_small) {
+ alloc_mark_stack(2*GC_mark_stack_size);
+ }
+ return(TRUE);
+ }
+
+ case MS_INVALID:
+ case MS_PARTIALLY_INVALID:
+ if (!GC_objects_are_marked) {
+ GC_mark_state = MS_PUSH_UNCOLLECTABLE;
+ return(FALSE);
+ }
+ if (GC_mark_stack_top >= GC_mark_stack) {
+ MARK_FROM_MARK_STACK();
+ return(FALSE);
+ }
+ if (scan_ptr == 0 && GC_mark_state == MS_INVALID) {
+ /* About to start a heap scan for marked objects. */
+ /* Mark stack is empty. OK to reallocate. */
+ if (GC_mark_stack_too_small) {
+ alloc_mark_stack(2*GC_mark_stack_size);
+ }
+ GC_mark_state = MS_PARTIALLY_INVALID;
+ }
+ scan_ptr = GC_push_next_marked(scan_ptr);
+ if (scan_ptr == 0 && GC_mark_state == MS_PARTIALLY_INVALID) {
+ GC_push_roots(TRUE, cold_gc_frame);
+ GC_objects_are_marked = TRUE;
+ if (GC_mark_state != MS_INVALID) {
+ GC_mark_state = MS_ROOTS_PUSHED;
+ }
+ }
+ return(FALSE);
+ default:
+ ABORT("GC_mark_some: bad state");
+ return(FALSE);
+ }
+}
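+/* For orientation, a sketch of the transitions driven above:		*/
+/*   MS_NONE              - no marking in progress.			*/
+/*   MS_PUSH_RESCUERS     - pushing dirty objects (incremental mode),	*/
+/*                          then roots.				*/
+/*   MS_PUSH_UNCOLLECTABLE - pushing uncollectable objects, then all	*/
+/*                          roots.					*/
+/*   MS_ROOTS_PUSHED      - draining the mark stack.			*/
+/*   MS_INVALID, MS_PARTIALLY_INVALID - mark bits are suspect; rescan	*/
+/*                          marked objects and redo the roots.		*/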
+
+
+#ifdef MSWIN32
+
+# ifdef __GNUC__
+
+ typedef struct {
+ EXCEPTION_REGISTRATION ex_reg;
+ void *alt_path;
+ } ext_ex_regn;
+
+
+ static EXCEPTION_DISPOSITION mark_ex_handler(
+ struct _EXCEPTION_RECORD *ex_rec,
+ void *est_frame,
+ struct _CONTEXT *context,
+ void *disp_ctxt)
+ {
+ if (ex_rec->ExceptionCode == STATUS_ACCESS_VIOLATION) {
+ ext_ex_regn *xer = (ext_ex_regn *)est_frame;
+
+ /* Unwind from the inner function assuming the standard */
+ /* function prologue. */
+ /* Assumes code has not been compiled with */
+ /* -fomit-frame-pointer. */
+ context->Esp = context->Ebp;
+ context->Ebp = *((DWORD *)context->Esp);
+ context->Esp = context->Esp - 8;
+
+ /* Resume execution at the "real" handler within the */
+ /* wrapper function. */
+ context->Eip = (DWORD )(xer->alt_path);
+
+ return ExceptionContinueExecution;
+
+ } else {
+ return ExceptionContinueSearch;
+ }
+ }
+# endif /* __GNUC__ */
+
+
+ GC_bool GC_mark_some(cold_gc_frame)
+ ptr_t cold_gc_frame;
+ {
+ GC_bool ret_val;
+
+# ifndef __GNUC__
+ /* Windows 98 appears to asynchronously create and remove */
+ /* writable memory mappings, for reasons we haven't yet */
+ /* understood. Since we look for writable regions to */
+ /* determine the root set, we may try to mark from an */
+ /* address range that disappeared since we started the */
+ /* collection. Thus we have to recover from faults here. */
+ /* This code does not appear to be necessary for Windows */
+ /* 95/NT/2000. Note that this code should never generate */
+ /* an incremental GC write fault. */
+
+ __try {
+
+# else /* __GNUC__ */
+
+ /* Manually install an exception handler since GCC does */
+ /* not yet support Structured Exception Handling (SEH) on */
+ /* Win32. */
+
+ ext_ex_regn er;
+
+ er.alt_path = &&handle_ex;
+ er.ex_reg.handler = mark_ex_handler;
+ asm volatile ("movl %%fs:0, %0" : "=r" (er.ex_reg.prev));
+ asm volatile ("movl %0, %%fs:0" : : "r" (&er));
+
+# endif /* __GNUC__ */
+
+ ret_val = GC_mark_some_inner(cold_gc_frame);
+
+# ifndef __GNUC__
+
+ } __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION ?
+ EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
+
+# else /* __GNUC__ */
+
+ /* Prevent GCC from considering the following code unreachable */
+ /* and thus eliminating it. */
+ if (er.alt_path != 0)
+ goto rm_handler;
+
+handle_ex:
+ /* Execution resumes from here on an access violation. */
+
+# endif /* __GNUC__ */
+
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf0("Caught ACCESS_VIOLATION in marker. "
+ "Memory mapping disappeared.\n");
+ }
+# endif /* CONDPRINT */
+
+ /* We have bad roots on the stack. Discard mark stack. */
+ /* Rescan from marked objects. Redetermine roots. */
+ GC_invalidate_mark_state();
+ scan_ptr = 0;
+
+ ret_val = FALSE;
+
+# ifndef __GNUC__
+
+ }
+
+# else /* __GNUC__ */
+
+rm_handler:
+ /* Uninstall the exception handler */
+ asm volatile ("mov %0, %%fs:0" : : "r" (er.ex_reg.prev));
+
+# endif /* __GNUC__ */
+
+ return ret_val;
+ }
+#endif /* MSWIN32 */
+
+
+GC_bool GC_mark_stack_empty()
+{
+ return(GC_mark_stack_top < GC_mark_stack);
+}
+
+#ifdef PROF_MARKER
+ word GC_prof_array[10];
+# define PROF(n) GC_prof_array[n]++
+#else
+# define PROF(n)
+#endif
+
+/* Given a pointer to someplace other than a small object page or the */
+/* first page of a large object, either: */
+/* - return a pointer to somewhere in the first page of the large */
+/* object, if current points to a large object. */
+/* In this case *hhdr is replaced with a pointer to the header */
+/* for the large object. */
+/* - just return current if it does not point to a large object. */
+/*ARGSUSED*/
+ptr_t GC_find_start(current, hhdr, new_hdr_p)
+register ptr_t current;
+register hdr *hhdr, **new_hdr_p;
+{
+ if (GC_all_interior_pointers) {
+ if (hhdr != 0) {
+ register ptr_t orig = current;
+
+ current = (ptr_t)HBLKPTR(current);
+ do {
+ current = current - HBLKSIZE*(word)hhdr;
+ hhdr = HDR(current);
+ } while(IS_FORWARDING_ADDR_OR_NIL(hhdr));
+ /* current points to near the start of the large object */
+ if (hhdr -> hb_flags & IGNORE_OFF_PAGE) return(orig);
+ if ((word *)orig - (word *)current
+ >= (ptrdiff_t)(hhdr->hb_sz)) {
+ /* Pointer past the end of the block */
+ return(orig);
+ }
+ *new_hdr_p = hhdr;
+ return(current);
+ } else {
+ return(current);
+ }
+ } else {
+ return(current);
+ }
+}
+
+void GC_invalidate_mark_state()
+{
+ GC_mark_state = MS_INVALID;
+ GC_mark_stack_top = GC_mark_stack-1;
+}
+
+mse * GC_signal_mark_stack_overflow(msp)
+mse * msp;
+{
+ GC_mark_state = MS_INVALID;
+ GC_mark_stack_too_small = TRUE;
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Mark stack overflow; current size = %lu entries\n",
+ GC_mark_stack_size);
+ }
+# endif
+ return(msp - GC_MARK_STACK_DISCARDS);
+}
+
+/*
+ * Mark objects pointed to by the regions described by
+ * mark stack entries between GC_mark_stack and GC_mark_stack_top,
+ * inclusive. Assumes the upper limit of a mark stack entry
+ * is never 0. A mark stack entry never has size 0.
+ * We try to traverse on the order of a hblk of memory before we return.
+ * Caller is responsible for calling this until the mark stack is empty.
+ * Note that this is the most performance critical routine in the
+ * collector. Hence it contains all sorts of ugly hacks to speed
+ * things up. In particular, we avoid procedure calls on the common
+ * path, we take advantage of peculiarities of the mark descriptor
+ * encoding, we optionally maintain a cache for the block address to
+ * header mapping, we prefetch when an object is "grayed", etc.
+ */
+mse * GC_mark_from(mark_stack_top, mark_stack, mark_stack_limit)
+mse * mark_stack_top;
+mse * mark_stack;
+mse * mark_stack_limit;
+{
+ int credit = HBLKSIZE; /* Remaining credit for marking work */
+ register word * current_p; /* Pointer to current candidate ptr. */
+ register word current; /* Candidate pointer. */
+ register word * limit; /* (Incl) limit of current candidate */
+ /* range */
+ register word descr;
+ register ptr_t greatest_ha = GC_greatest_plausible_heap_addr;
+ register ptr_t least_ha = GC_least_plausible_heap_addr;
+ DECLARE_HDR_CACHE;
+
+# define SPLIT_RANGE_WORDS 128 /* Must be power of 2. */
+
+ GC_objects_are_marked = TRUE;
+ INIT_HDR_CACHE;
+# ifdef OS2 /* Use untweaked version to circumvent compiler problem */
+ while (mark_stack_top >= mark_stack && credit >= 0) {
+# else
+ while ((((ptr_t)mark_stack_top - (ptr_t)mark_stack) | credit)
+ >= 0) {
+# endif
+ current_p = mark_stack_top -> mse_start;
+ descr = mark_stack_top -> mse_descr;
+ retry:
+ /* current_p and descr describe the current object. */
+ /* *mark_stack_top is vacant. */
+ /* The following is 0 only for small objects described by a simple */
+ /* length descriptor. For many applications this is the common */
+ /* case, so we try to detect it quickly. */
+ if (descr & ((~(WORDS_TO_BYTES(SPLIT_RANGE_WORDS) - 1)) | GC_DS_TAGS)) {
+ word tag = descr & GC_DS_TAGS;
+
+ switch(tag) {
+ case GC_DS_LENGTH:
+ /* Large length. */
+ /* Process part of the range to avoid pushing too much on the */
+ /* stack. */
+ GC_ASSERT(descr < (word)GC_greatest_plausible_heap_addr
+ - (word)GC_least_plausible_heap_addr);
+# ifdef PARALLEL_MARK
+# define SHARE_BYTES 2048
+ if (descr > SHARE_BYTES && GC_parallel
+ && mark_stack_top < mark_stack_limit - 1) {
+ int new_size = (descr/2) & ~(sizeof(word)-1);
+ mark_stack_top -> mse_start = current_p;
+ mark_stack_top -> mse_descr = new_size + sizeof(word);
+ /* makes sure we handle */
+ /* misaligned pointers. */
+ mark_stack_top++;
+ current_p = (word *) ((char *)current_p + new_size);
+ descr -= new_size;
+ goto retry;
+ }
+# endif /* PARALLEL_MARK */
+ mark_stack_top -> mse_start =
+ limit = current_p + SPLIT_RANGE_WORDS-1;
+ mark_stack_top -> mse_descr =
+ descr - WORDS_TO_BYTES(SPLIT_RANGE_WORDS-1);
+ /* Make sure that pointers overlapping the two ranges are */
+ /* considered. */
+ limit = (word *)((char *)limit + sizeof(word) - ALIGNMENT);
+ break;
+ case GC_DS_BITMAP:
+ mark_stack_top--;
+ descr &= ~GC_DS_TAGS;
+ credit -= WORDS_TO_BYTES(WORDSZ/2); /* guess */
+ while (descr != 0) {
+ if ((signed_word)descr < 0) {
+ current = *current_p;
+ FIXUP_POINTER(current);
+ if ((ptr_t)current >= least_ha && (ptr_t)current < greatest_ha) {
+ PREFETCH((ptr_t)current);
+ HC_PUSH_CONTENTS((ptr_t)current, mark_stack_top,
+ mark_stack_limit, current_p, exit1);
+ }
+ }
+ descr <<= 1;
+ ++ current_p;
+ }
+ continue;
+ case GC_DS_PROC:
+ mark_stack_top--;
+ credit -= GC_PROC_BYTES;
+ mark_stack_top =
+ (*PROC(descr))
+ (current_p, mark_stack_top,
+ mark_stack_limit, ENV(descr));
+ continue;
+ case GC_DS_PER_OBJECT:
+ if ((signed_word)descr >= 0) {
+ /* Descriptor is in the object. */
+ descr = *(word *)((ptr_t)current_p + descr - GC_DS_PER_OBJECT);
+ } else {
+ /* Descriptor is in type descriptor pointed to by first */
+ /* word in object. */
+ ptr_t type_descr = *(ptr_t *)current_p;
+ /* type_descr is either a valid pointer to the descriptor */
+	      /* structure, or this object was on a free list.  If it	*/
+	      /* was anything but the last object on the free list,	*/
+ /* we will misinterpret the next object on the free list as */
+ /* the type descriptor, and get a 0 GC descriptor, which */
+ /* is ideal. Unfortunately, we need to check for the last */
+ /* object case explicitly. */
+ if (0 == type_descr) {
+ /* Rarely executed. */
+ mark_stack_top--;
+ continue;
+ }
+ descr = *(word *)(type_descr
+ - (descr - (GC_DS_PER_OBJECT
+ - GC_INDIR_PER_OBJ_BIAS)));
+ }
+ if (0 == descr) {
+ /* Can happen either because we generated a 0 descriptor */
+ /* or we saw a pointer to a free object. */
+ mark_stack_top--;
+ continue;
+ }
+ goto retry;
+ }
+ } else /* Small object with length descriptor */ {
+ mark_stack_top--;
+ limit = (word *)(((ptr_t)current_p) + (word)descr);
+ }
+ /* The simple case in which we're scanning a range. */
+ GC_ASSERT(!((word)current_p & (ALIGNMENT-1)));
+ credit -= (ptr_t)limit - (ptr_t)current_p;
+ limit -= 1;
+ {
+# define PREF_DIST 4
+
+# ifndef SMALL_CONFIG
+ word deferred;
+
+ /* Try to prefetch the next pointer to be examined asap. */
+ /* Empirically, this also seems to help slightly without */
+ /* prefetches, at least on linux/X86. Presumably this loop */
+ /* ends up with less register pressure, and gcc thus ends up */
+ /* generating slightly better code. Overall gcc code quality */
+ /* for this loop is still not great. */
+ for(;;) {
+ PREFETCH((ptr_t)limit - PREF_DIST*CACHE_LINE_SIZE);
+ GC_ASSERT(limit >= current_p);
+ deferred = *limit;
+ FIXUP_POINTER(deferred);
+ limit = (word *)((char *)limit - ALIGNMENT);
+ if ((ptr_t)deferred >= least_ha && (ptr_t)deferred < greatest_ha) {
+ PREFETCH((ptr_t)deferred);
+ break;
+ }
+ if (current_p > limit) goto next_object;
+ /* Unroll once, so we don't do too many of the prefetches */
+ /* based on limit. */
+ deferred = *limit;
+ FIXUP_POINTER(deferred);
+ limit = (word *)((char *)limit - ALIGNMENT);
+ if ((ptr_t)deferred >= least_ha && (ptr_t)deferred < greatest_ha) {
+ PREFETCH((ptr_t)deferred);
+ break;
+ }
+ if (current_p > limit) goto next_object;
+ }
+# endif
+
+ while (current_p <= limit) {
+ /* Empirically, unrolling this loop doesn't help a lot. */
+ /* Since HC_PUSH_CONTENTS expands to a lot of code, */
+ /* we don't. */
+ current = *current_p;
+ FIXUP_POINTER(current);
+ PREFETCH((ptr_t)current_p + PREF_DIST*CACHE_LINE_SIZE);
+ if ((ptr_t)current >= least_ha && (ptr_t)current < greatest_ha) {
+ /* Prefetch the contents of the object we just pushed. It's */
+ /* likely we will need them soon. */
+ PREFETCH((ptr_t)current);
+ HC_PUSH_CONTENTS((ptr_t)current, mark_stack_top,
+ mark_stack_limit, current_p, exit2);
+ }
+ current_p = (word *)((char *)current_p + ALIGNMENT);
+ }
+
+# ifndef SMALL_CONFIG
+ /* We still need to mark the entry we previously prefetched. */
+	  /* We already know that it passes the preliminary pointer	*/
+ /* validity test. */
+ HC_PUSH_CONTENTS((ptr_t)deferred, mark_stack_top,
+ mark_stack_limit, current_p, exit4);
+ next_object:;
+# endif
+ }
+ }
+ return mark_stack_top;
+}
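+/* A summary of the descriptor encodings handled above (a sketch; the	*/
+/* authoritative definitions are in the gc_mark.h/gc_pmark.h headers):	*/
+/*   GC_DS_LENGTH     - descr is the object's length in bytes.		*/
+/*   GC_DS_BITMAP     - the high bits form a bitmap, one bit per word,	*/
+/*                      selecting the pointer-valued words.		*/
+/*   GC_DS_PROC       - descr encodes a mark procedure index and its	*/
+/*                      environment (PROC()/ENV() above).		*/
+/*   GC_DS_PER_OBJECT - the real descriptor is found in, or reached	*/
+/*                      through, the object itself.			*/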
+
+#ifdef PARALLEL_MARK
+
+/* We assume we have an ANSI C Compiler. */
+GC_bool GC_help_wanted = FALSE;
+unsigned GC_helper_count = 0;
+unsigned GC_active_count = 0;
+mse * VOLATILE GC_first_nonempty;
+word GC_mark_no = 0;
+
+#define LOCAL_MARK_STACK_SIZE HBLKSIZE
+ /* Under normal circumstances, this is big enough to guarantee */
+	/* we don't overflow half of it in a single call to	*/
+ /* GC_mark_from. */
+
+
+/* Steal mark stack entries starting at mse low into mark stack local */
+/* until we either steal mse high, or we have max entries. */
+/* Return a pointer to the top of the local mark stack. */
+/* *next is replaced by a pointer to the next unscanned mark stack */
+/* entry. */
+mse * GC_steal_mark_stack(mse * low, mse * high, mse * local,
+ unsigned max, mse **next)
+{
+ mse *p;
+ mse *top = local - 1;
+ unsigned i = 0;
+
+ /* Make sure that prior writes to the mark stack are visible. */
+ /* On some architectures, the fact that the reads are */
+ /* volatile should suffice. */
+# if !defined(IA64) && !defined(HP_PA) && !defined(I386)
+ GC_memory_barrier();
+# endif
+ GC_ASSERT(high >= low-1 && high - low + 1 <= GC_mark_stack_size);
+ for (p = low; p <= high && i <= max; ++p) {
+ word descr = *(volatile word *) &(p -> mse_descr);
+ /* In the IA64 memory model, the following volatile store is */
+ /* ordered after this read of descr. Thus a thread must read */
+ /* the original nonzero value. HP_PA appears to be similar, */
+ /* and if I'm reading the P4 spec correctly, X86 is probably */
+ /* also OK. In some other cases we need a barrier. */
+# if !defined(IA64) && !defined(HP_PA) && !defined(I386)
+ GC_memory_barrier();
+# endif
+ if (descr != 0) {
+ *(volatile word *) &(p -> mse_descr) = 0;
+ /* More than one thread may get this entry, but that's only */
+ /* a minor performance problem. */
+ ++top;
+ top -> mse_descr = descr;
+ top -> mse_start = p -> mse_start;
+ GC_ASSERT( (top -> mse_descr & GC_DS_TAGS) != GC_DS_LENGTH ||
+ top -> mse_descr < (ptr_t)GC_greatest_plausible_heap_addr
+ - (ptr_t)GC_least_plausible_heap_addr);
+ /* If this is a big object, count it as */
+ /* size/256 + 1 objects. */
+ ++i;
+ if ((descr & GC_DS_TAGS) == GC_DS_LENGTH) i += (descr >> 8);
+ }
+ }
+ *next = p;
+ return top;
+}
+
+/* Copy back a local mark stack. */
+/* low and high are inclusive bounds. */
+void GC_return_mark_stack(mse * low, mse * high)
+{
+ mse * my_top;
+ mse * my_start;
+ size_t stack_size;
+
+ if (high < low) return;
+ stack_size = high - low + 1;
+ GC_acquire_mark_lock();
+ my_top = GC_mark_stack_top;
+ my_start = my_top + 1;
+ if (my_start - GC_mark_stack + stack_size > GC_mark_stack_size) {
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf0("No room to copy back mark stack.");
+ }
+# endif
+ GC_mark_state = MS_INVALID;
+ GC_mark_stack_too_small = TRUE;
+ /* We drop the local mark stack. We'll fix things later. */
+ } else {
+ BCOPY(low, my_start, stack_size * sizeof(mse));
+	GC_ASSERT(GC_mark_stack_top == my_top);
+# if !defined(IA64) && !defined(HP_PA)
+ GC_memory_barrier();
+# endif
+ /* On IA64, the volatile write acts as a release barrier. */
+ GC_mark_stack_top = my_top + stack_size;
+ }
+ GC_release_mark_lock();
+ GC_notify_all_marker();
+}
+
+/* Mark from the local mark stack. */
+/* On return, the local mark stack is empty. */
+/* But this may be achieved by copying the */
+/* local mark stack back into the global one. */
+void GC_do_local_mark(mse *local_mark_stack, mse *local_top)
+{
+ unsigned n;
+# define N_LOCAL_ITERS 1
+
+# ifdef GC_ASSERTIONS
+ /* Make sure we don't hold mark lock. */
+ GC_acquire_mark_lock();
+ GC_release_mark_lock();
+# endif
+ for (;;) {
+ for (n = 0; n < N_LOCAL_ITERS; ++n) {
+ local_top = GC_mark_from(local_top, local_mark_stack,
+ local_mark_stack + LOCAL_MARK_STACK_SIZE);
+ if (local_top < local_mark_stack) return;
+ if (local_top - local_mark_stack >= LOCAL_MARK_STACK_SIZE/2) {
+ GC_return_mark_stack(local_mark_stack, local_top);
+ return;
+ }
+ }
+ if (GC_mark_stack_top < GC_first_nonempty &&
+ GC_active_count < GC_helper_count
+ && local_top > local_mark_stack + 1) {
+ /* Try to share the load, since the main stack is empty, */
+ /* and helper threads are waiting for a refill. */
+ /* The entries near the bottom of the stack are likely */
+	/* to require more work.  Thus we return those, even though	*/
+ /* it's harder. */
+ mse * p;
+ mse * new_bottom = local_mark_stack
+ + (local_top - local_mark_stack)/2;
+ GC_ASSERT(new_bottom > local_mark_stack
+ && new_bottom < local_top);
+ GC_return_mark_stack(local_mark_stack, new_bottom - 1);
+ memmove(local_mark_stack, new_bottom,
+ (local_top - new_bottom + 1) * sizeof(mse));
+ local_top -= (new_bottom - local_mark_stack);
+ }
+ }
+}
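
The sharing branch above donates the bottom half of the local stack (the older entries, which tend to describe larger subgraphs) and slides the remainder down with memmove. A minimal standalone sketch of just that splitting step, with a hypothetical donate() standing in for GC_return_mark_stack and an int array standing in for mse entries:

    #include <stddef.h>
    #include <string.h>

    /* Donate entries [0, n/2) to the global queue and keep the     */
    /* rest; returns the new local count.  donate takes inclusive   */
    /* bounds, like GC_return_mark_stack above.                     */
    static size_t share_bottom_half(int *stack, size_t n,
                                    void (*donate)(int *lo, int *hi))
    {
        size_t keep_from = n / 2;          /* first index we keep */
        if (keep_from == 0) return n;      /* too little to share */
        donate(stack, stack + keep_from - 1);
        memmove(stack, stack + keep_from,
                (n - keep_from) * sizeof stack[0]);
        return n - keep_from;
    }
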
+
+#define ENTRIES_TO_GET 5
+
+long GC_markers = 2;		/* Normally changed by thread-library-	*/
+				/* specific code.			*/
+
+/* Mark using the local mark stack until the global mark stack is empty */
+/* and there are no active workers. Update GC_first_nonempty to reflect */
+/* progress. */
+/* Caller does not hold mark lock. */
+/* Caller has already incremented GC_helper_count. We decrement it, */
+/* and maintain GC_active_count. */
+void GC_mark_local(mse *local_mark_stack, int id)
+{
+ mse * my_first_nonempty;
+
+ GC_acquire_mark_lock();
+ GC_active_count++;
+ my_first_nonempty = GC_first_nonempty;
+ GC_ASSERT(GC_first_nonempty >= GC_mark_stack &&
+ GC_first_nonempty <= GC_mark_stack_top + 1);
+# ifdef PRINTSTATS
+ GC_printf1("Starting mark helper %lu\n", (unsigned long)id);
+# endif
+ GC_release_mark_lock();
+ for (;;) {
+ size_t n_on_stack;
+ size_t n_to_get;
+ mse *next;
+ mse * my_top;
+ mse * local_top;
+ mse * global_first_nonempty = GC_first_nonempty;
+
+ GC_ASSERT(my_first_nonempty >= GC_mark_stack &&
+ my_first_nonempty <= GC_mark_stack_top + 1);
+ GC_ASSERT(global_first_nonempty >= GC_mark_stack &&
+ global_first_nonempty <= GC_mark_stack_top + 1);
+ if (my_first_nonempty < global_first_nonempty) {
+ my_first_nonempty = global_first_nonempty;
+ } else if (global_first_nonempty < my_first_nonempty) {
+ GC_compare_and_exchange((word *)(&GC_first_nonempty),
+ (word) global_first_nonempty,
+ (word) my_first_nonempty);
+ /* If this fails, we just go ahead, without updating */
+ /* GC_first_nonempty. */
+ }
+ /* Perhaps we should also update GC_first_nonempty, if it */
+ /* is less. But that would require using atomic updates. */
+ my_top = GC_mark_stack_top;
+ n_on_stack = my_top - my_first_nonempty + 1;
+ if (0 == n_on_stack) {
+ GC_acquire_mark_lock();
+ my_top = GC_mark_stack_top;
+ n_on_stack = my_top - my_first_nonempty + 1;
+ if (0 == n_on_stack) {
+ GC_active_count--;
+ GC_ASSERT(GC_active_count <= GC_helper_count);
+ /* Other markers may redeposit objects */
+ /* on the stack. */
+ if (0 == GC_active_count) GC_notify_all_marker();
+ while (GC_active_count > 0
+ && GC_first_nonempty > GC_mark_stack_top) {
+ /* We will be notified if either GC_active_count */
+ /* reaches zero, or if more objects are pushed on */
+ /* the global mark stack. */
+ GC_wait_marker();
+ }
+ if (GC_active_count == 0 &&
+ GC_first_nonempty > GC_mark_stack_top) {
+ GC_bool need_to_notify = FALSE;
+ /* The above conditions can't be falsified while we */
+ /* hold the mark lock, since neither */
+ /* GC_active_count nor GC_mark_stack_top can */
+ /* change. GC_first_nonempty can only be */
+ /* incremented asynchronously. Thus we know that */
+ /* both conditions actually held simultaneously. */
+ GC_helper_count--;
+ if (0 == GC_helper_count) need_to_notify = TRUE;
+# ifdef PRINTSTATS
+ GC_printf1(
+ "Finished mark helper %lu\n", (unsigned long)id);
+# endif
+ GC_release_mark_lock();
+ if (need_to_notify) GC_notify_all_marker();
+ return;
+ }
+ /* else there's something on the stack again, or */
+ /* another helper may push something. */
+ GC_active_count++;
+ GC_ASSERT(GC_active_count > 0);
+ GC_release_mark_lock();
+ continue;
+ } else {
+ GC_release_mark_lock();
+ }
+ }
+ n_to_get = ENTRIES_TO_GET;
+ if (n_on_stack < 2 * ENTRIES_TO_GET) n_to_get = 1;
+ local_top = GC_steal_mark_stack(my_first_nonempty, my_top,
+ local_mark_stack, n_to_get,
+ &my_first_nonempty);
+ GC_ASSERT(my_first_nonempty >= GC_mark_stack &&
+ my_first_nonempty <= GC_mark_stack_top + 1);
+ GC_do_local_mark(local_mark_stack, local_top);
+ }
+}
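
The idle path above is a classic distributed-termination test: a worker retires, waits while anyone is still active and nothing is pending, and either rejoins when work reappears or concludes that the phase is over. A hedged pthreads sketch of the same test; all names (pool, active, pending) are invented, and the collector's mark lock with GC_wait_marker/GC_notify_all_marker plays the role of the mutex and condition variable:

    #include <pthread.h>

    struct pool {
        pthread_mutex_t mu;
        pthread_cond_t  cv;   /* signaled when work arrives or active hits 0 */
        int active;           /* workers currently processing entries */
        int pending;          /* entries available in the shared queue */
    };

    /* Returns 1 when the phase is over: nothing pending and nobody */
    /* active, a condition that is stable while mu is held.         */
    static int worker_idle(struct pool *p)
    {
        pthread_mutex_lock(&p->mu);
        p->active--;
        if (p->active == 0) pthread_cond_broadcast(&p->cv);
        while (p->active > 0 && p->pending == 0)
            pthread_cond_wait(&p->cv, &p->mu);
        if (p->active == 0 && p->pending == 0) {
            pthread_mutex_unlock(&p->mu);
            return 1;                      /* global termination */
        }
        p->active++;                       /* work reappeared; rejoin */
        pthread_mutex_unlock(&p->mu);
        return 0;
    }

Producers are assumed to increment pending and broadcast on cv under mu, mirroring GC_return_mark_stack's notify after copying entries back.
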
+
+/* Perform parallel mark.				*/
+/* We hold the GC lock, not the mark lock. */
+/* Currently runs until the mark stack is */
+/* empty. */
+void GC_do_parallel_mark()
+{
+ mse local_mark_stack[LOCAL_MARK_STACK_SIZE];
+ mse * local_top;
+ mse * my_top;
+
+ GC_acquire_mark_lock();
+ GC_ASSERT(I_HOLD_LOCK());
+ /* This could be a GC_ASSERT, but it seems safer to keep it on */
+ /* all the time, especially since it's cheap. */
+ if (GC_help_wanted || GC_active_count != 0 || GC_helper_count != 0)
+ ABORT("Tried to start parallel mark in bad state");
+# ifdef PRINTSTATS
+ GC_printf1("Starting marking for mark phase number %lu\n",
+ (unsigned long)GC_mark_no);
+# endif
+ GC_first_nonempty = GC_mark_stack;
+ GC_active_count = 0;
+ GC_helper_count = 1;
+ GC_help_wanted = TRUE;
+ GC_release_mark_lock();
+ GC_notify_all_marker();
+ /* Wake up potential helpers. */
+ GC_mark_local(local_mark_stack, 0);
+ GC_acquire_mark_lock();
+ GC_help_wanted = FALSE;
+ /* Done; clean up. */
+ while (GC_helper_count > 0) GC_wait_marker();
+ /* GC_helper_count cannot be incremented while GC_help_wanted == FALSE */
+# ifdef PRINTSTATS
+ GC_printf1(
+ "Finished marking for mark phase number %lu\n",
+ (unsigned long)GC_mark_no);
+# endif
+ GC_mark_no++;
+ GC_release_mark_lock();
+ GC_notify_all_marker();
+}
+
+
+/* Try to help out the marker, if it's running. */
+/* We do not hold the GC lock, but the requestor does. */
+void GC_help_marker(word my_mark_no)
+{
+ mse local_mark_stack[LOCAL_MARK_STACK_SIZE];
+ unsigned my_id;
+ mse * my_first_nonempty;
+
+ if (!GC_parallel) return;
+ GC_acquire_mark_lock();
+    while (GC_mark_no < my_mark_no
+           || (!GC_help_wanted && GC_mark_no == my_mark_no)) {
+ GC_wait_marker();
+ }
+ my_id = GC_helper_count;
+ if (GC_mark_no != my_mark_no || my_id >= GC_markers) {
+ /* Second test is useful only if original threads can also */
+ /* act as helpers. Under Linux they can't. */
+ GC_release_mark_lock();
+ return;
+ }
+ GC_helper_count = my_id + 1;
+ GC_release_mark_lock();
+ GC_mark_local(local_mark_stack, my_id);
+ /* GC_mark_local decrements GC_helper_count. */
+}
+
+#endif /* PARALLEL_MARK */
+
+/* Allocate or reallocate space for mark stack of size n words. */
+/* May silently fail. */
+static void alloc_mark_stack(n)
+word n;
+{
+ mse * new_stack = (mse *)GC_scratch_alloc(n * sizeof(struct GC_ms_entry));
+
+ GC_mark_stack_too_small = FALSE;
+ if (GC_mark_stack_size != 0) {
+ if (new_stack != 0) {
+ word displ = (word)GC_mark_stack & (GC_page_size - 1);
+ signed_word size = GC_mark_stack_size * sizeof(struct GC_ms_entry);
+
+ /* Recycle old space */
+ if (0 != displ) displ = GC_page_size - displ;
+ size = (size - displ) & ~(GC_page_size - 1);
+ if (size > 0) {
+ GC_add_to_heap((struct hblk *)
+ ((word)GC_mark_stack + displ), (word)size);
+ }
+ GC_mark_stack = new_stack;
+ GC_mark_stack_size = n;
+ GC_mark_stack_limit = new_stack + n;
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Grew mark stack to %lu frames\n",
+ (unsigned long) GC_mark_stack_size);
+ }
+# endif
+ } else {
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Failed to grow mark stack to %lu frames\n",
+ (unsigned long) n);
+ }
+# endif
+ }
+ } else {
+ if (new_stack == 0) {
+ GC_err_printf0("No space for mark stack\n");
+ EXIT();
+ }
+ GC_mark_stack = new_stack;
+ GC_mark_stack_size = n;
+ GC_mark_stack_limit = new_stack + n;
+ }
+ GC_mark_stack_top = GC_mark_stack-1;
+}
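
The recycling step above trims the old stack to its largest page-aligned subrange before returning it to the heap with GC_add_to_heap. The same arithmetic in isolation, assuming a power-of-two page size (the function name is hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    /* Round the start of [buf, buf+len) up and the length down to  */
    /* page boundaries; returns the aligned start, with the usable  */
    /* length in *out_len (possibly 0).                             */
    static void *page_aligned_subrange(void *buf, size_t len,
                                       size_t pagesize, size_t *out_len)
    {
        uintptr_t start = (uintptr_t)buf;
        uintptr_t displ = start & (pagesize - 1);
        if (displ != 0) displ = pagesize - displ;   /* distance to boundary */
        if (len < displ) { *out_len = 0; return NULL; }
        *out_len = (len - displ) & ~(pagesize - 1); /* round size down */
        return (void *)(start + displ);
    }
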
+
+void GC_mark_init()
+{
+ alloc_mark_stack(INITIAL_MARK_STACK_SIZE);
+}
+
+/*
+ * Push all locations between b and t onto the mark stack.
+ * b is the first location to be checked. t is one past the last
+ * location to be checked.
+ * Should only be used if there is no possibility of mark stack
+ * overflow.
+ */
+void GC_push_all(bottom, top)
+ptr_t bottom;
+ptr_t top;
+{
+ register word length;
+
+ bottom = (ptr_t)(((word) bottom + ALIGNMENT-1) & ~(ALIGNMENT-1));
+ top = (ptr_t)(((word) top) & ~(ALIGNMENT-1));
+ if (top == 0 || bottom == top) return;
+ GC_mark_stack_top++;
+ if (GC_mark_stack_top >= GC_mark_stack_limit) {
+ ABORT("unexpected mark stack overflow");
+ }
+ length = top - bottom;
+# if GC_DS_TAGS > ALIGNMENT - 1
+ length += GC_DS_TAGS;
+ length &= ~GC_DS_TAGS;
+# endif
+ GC_mark_stack_top -> mse_start = (word *)bottom;
+ GC_mark_stack_top -> mse_descr = length;
+}
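
GC_push_all first rounds bottom up and top down to the collector's alignment, so that only properly aligned words are scanned. Those two roundings as standalone helpers, with ALIGN as a hypothetical power-of-two stand-in for ALIGNMENT:

    #include <stdint.h>

    #define ALIGN 8   /* hypothetical stand-in for ALIGNMENT */

    static uintptr_t align_up(uintptr_t p)     /* first aligned address >= p */
    {
        return (p + ALIGN - 1) & ~(uintptr_t)(ALIGN - 1);
    }

    static uintptr_t align_down(uintptr_t p)   /* last aligned address <= p */
    {
        return p & ~(uintptr_t)(ALIGN - 1);
    }
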
+
+/*
+ * Analogous to the above, but push only those pages h with dirty_fn(h) != 0.
+ * We use push_fn to actually push the block.
+ * Used either to selectively push dirty pages, or to push a block
+ * in piecemeal fashion, to allow for more marking concurrency.
+ * Will not overflow mark stack if push_fn pushes a small fixed number
+ * of entries. (This is invoked only if push_fn pushes a single entry,
+ * or if it marks each object before pushing it, thus ensuring progress
+ * in the event of a stack overflow.)
+ */
+void GC_push_selected(bottom, top, dirty_fn, push_fn)
+ptr_t bottom;
+ptr_t top;
+int (*dirty_fn) GC_PROTO((struct hblk * h));
+void (*push_fn) GC_PROTO((ptr_t bottom, ptr_t top));
+{
+ register struct hblk * h;
+
+ bottom = (ptr_t)(((long) bottom + ALIGNMENT-1) & ~(ALIGNMENT-1));
+ top = (ptr_t)(((long) top) & ~(ALIGNMENT-1));
+
+ if (top == 0 || bottom == top) return;
+ h = HBLKPTR(bottom + HBLKSIZE);
+ if (top <= (ptr_t) h) {
+ if ((*dirty_fn)(h-1)) {
+ (*push_fn)(bottom, top);
+ }
+ return;
+ }
+ if ((*dirty_fn)(h-1)) {
+ (*push_fn)(bottom, (ptr_t)h);
+ }
+ while ((ptr_t)(h+1) <= top) {
+ if ((*dirty_fn)(h)) {
+ if ((word)(GC_mark_stack_top - GC_mark_stack)
+ > 3 * GC_mark_stack_size / 4) {
+ /* Danger of mark stack overflow */
+ (*push_fn)((ptr_t)h, top);
+ return;
+ } else {
+ (*push_fn)((ptr_t)h, (ptr_t)(h+1));
+ }
+ }
+ h++;
+ }
+ if ((ptr_t)h != top) {
+ if ((*dirty_fn)(h)) {
+ (*push_fn)((ptr_t)h, top);
+ }
+ }
+ if (GC_mark_stack_top >= GC_mark_stack_limit) {
+ ABORT("unexpected mark stack overflow");
+ }
+}
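
GC_push_selected walks the range in block-sized pieces: a partial chunk up to the first block boundary, whole blocks, then the partial tail. The same chunking skeleton in isolation (names hypothetical, CHUNK a power of two):

    #include <stdint.h>

    #define CHUNK 4096u   /* hypothetical stand-in for HBLKSIZE */

    /* Apply fn to [lo, hi) as a partial head, whole aligned        */
    /* chunks, and a partial tail.                                  */
    static void for_each_chunk(uintptr_t lo, uintptr_t hi,
                               void (*fn)(uintptr_t, uintptr_t))
    {
        uintptr_t first = (lo + CHUNK) & ~(uintptr_t)(CHUNK - 1);
        uintptr_t p;

        if (hi <= first) { fn(lo, hi); return; }
        fn(lo, first);
        for (p = first; p + CHUNK <= hi; p += CHUNK)
            fn(p, p + CHUNK);
        if (p != hi) fn(p, hi);
    }
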
+
+# ifndef SMALL_CONFIG
+
+#ifdef PARALLEL_MARK
+ /* Break up root sections into page size chunks to better spread */
+ /* out work. */
+ GC_bool GC_true_func(struct hblk *h) { return TRUE; }
+# define GC_PUSH_ALL(b,t) GC_push_selected(b,t,GC_true_func,GC_push_all);
+#else
+# define GC_PUSH_ALL(b,t) GC_push_all(b,t);
+#endif
+
+
+void GC_push_conditional(bottom, top, all)
+ptr_t bottom;
+ptr_t top;
+int all;
+{
+ if (all) {
+ if (GC_dirty_maintained) {
+# ifdef PROC_VDB
+ /* Pages that were never dirtied cannot contain pointers */
+ GC_push_selected(bottom, top, GC_page_was_ever_dirty, GC_push_all);
+# else
+ GC_push_all(bottom, top);
+# endif
+ } else {
+ GC_push_all(bottom, top);
+ }
+ } else {
+ GC_push_selected(bottom, top, GC_page_was_dirty, GC_push_all);
+ }
+}
+#endif
+
+# if defined(MSWIN32) || defined(MSWINCE)
+ void __cdecl GC_push_one(p)
+# else
+ void GC_push_one(p)
+# endif
+word p;
+{
+ GC_PUSH_ONE_STACK(p, MARKED_FROM_REGISTER);
+}
+
+struct GC_ms_entry *GC_mark_and_push(obj, mark_stack_ptr, mark_stack_limit, src)
+GC_PTR obj;
+struct GC_ms_entry * mark_stack_ptr;
+struct GC_ms_entry * mark_stack_limit;
+GC_PTR *src;
+{
+ PREFETCH(obj);
+ PUSH_CONTENTS(obj, mark_stack_ptr /* modified */, mark_stack_limit, src,
+ was_marked /* internally generated exit label */);
+ return mark_stack_ptr;
+}
+
+# ifdef __STDC__
+# define BASE(p) (word)GC_base((void *)(p))
+# else
+# define BASE(p) (word)GC_base((char *)(p))
+# endif
+
+/* Mark and push (i.e. gray) a single object p onto the main */
+/* mark stack. Consider p to be valid if it is an interior */
+/* pointer. */
+/* The object p has passed a preliminary pointer validity */
+/* test, but we do not definitely know whether it is valid. */
+/* Mark bits are NOT atomically updated. Thus this must be the */
+/* only thread setting them. */
+# if defined(PRINT_BLACK_LIST) || defined(KEEP_BACK_PTRS)
+ void GC_mark_and_push_stack(p, source)
+ ptr_t source;
+# else
+ void GC_mark_and_push_stack(p)
+# define source 0
+# endif
+register word p;
+{
+ register word r;
+ register hdr * hhdr;
+ register int displ;
+
+ GET_HDR(p, hhdr);
+ if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
+ if (hhdr != 0) {
+ r = BASE(p);
+ hhdr = HDR(r);
+ displ = BYTES_TO_WORDS(HBLKDISPL(r));
+ }
+ } else {
+ register map_entry_type map_entry;
+
+ displ = HBLKDISPL(p);
+ map_entry = MAP_ENTRY((hhdr -> hb_map), displ);
+ if (map_entry >= MAX_OFFSET) {
+ if (map_entry == OFFSET_TOO_BIG || !GC_all_interior_pointers) {
+ r = BASE(p);
+ displ = BYTES_TO_WORDS(HBLKDISPL(r));
+ if (r == 0) hhdr = 0;
+ } else {
+ /* Offset invalid, but map reflects interior pointers */
+ hhdr = 0;
+ }
+ } else {
+ displ = BYTES_TO_WORDS(displ);
+ displ -= map_entry;
+ r = (word)((word *)(HBLKPTR(p)) + displ);
+ }
+ }
+ /* If hhdr != 0 then r == GC_base(p), only we did it faster. */
+ /* displ is the word index within the block. */
+ if (hhdr == 0) {
+# ifdef PRINT_BLACK_LIST
+ GC_add_to_black_list_stack(p, source);
+# else
+ GC_add_to_black_list_stack(p);
+# endif
+# undef source /* In case we had to define it. */
+ } else {
+ if (!mark_bit_from_hdr(hhdr, displ)) {
+ set_mark_bit_from_hdr(hhdr, displ);
+ GC_STORE_BACK_PTR(source, (ptr_t)r);
+ PUSH_OBJ((word *)r, hhdr, GC_mark_stack_top,
+ GC_mark_stack_limit);
+ }
+ }
+}
+
+# ifdef TRACE_BUF
+
+# define TRACE_ENTRIES 1000
+
+struct trace_entry {
+ char * kind;
+ word gc_no;
+ word words_allocd;
+ word arg1;
+ word arg2;
+} GC_trace_buf[TRACE_ENTRIES];
+
+int GC_trace_buf_ptr = 0;
+
+void GC_add_trace_entry(char *kind, word arg1, word arg2)
+{
+ GC_trace_buf[GC_trace_buf_ptr].kind = kind;
+ GC_trace_buf[GC_trace_buf_ptr].gc_no = GC_gc_no;
+ GC_trace_buf[GC_trace_buf_ptr].words_allocd = GC_words_allocd;
+ GC_trace_buf[GC_trace_buf_ptr].arg1 = arg1 ^ 0x80000000;
+ GC_trace_buf[GC_trace_buf_ptr].arg2 = arg2 ^ 0x80000000;
+ GC_trace_buf_ptr++;
+ if (GC_trace_buf_ptr >= TRACE_ENTRIES) GC_trace_buf_ptr = 0;
+}
+
+void GC_print_trace(word gc_no, GC_bool lock)
+{
+ int i;
+ struct trace_entry *p;
+
+ if (lock) LOCK();
+ for (i = GC_trace_buf_ptr-1; i != GC_trace_buf_ptr; i--) {
+ if (i < 0) i = TRACE_ENTRIES-1;
+ p = GC_trace_buf + i;
+ if (p -> gc_no < gc_no || p -> kind == 0) return;
+	printf("Trace:%s (gc:%lu,words:%lu) 0x%lX, 0x%lX\n",
+		p -> kind, (unsigned long)(p -> gc_no),
+		(unsigned long)(p -> words_allocd),
+		(unsigned long)((p -> arg1) ^ 0x80000000),
+		(unsigned long)((p -> arg2) ^ 0x80000000));
+ }
+ printf("Trace incomplete\n");
+ if (lock) UNLOCK();
+}
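
GC_trace_buf is a fixed-size ring that wraps by resetting the index, and GC_print_trace walks it newest-first. A self-contained miniature of the same structure; the dump loop here is bounded by the entry count rather than by the sentinel tests above, and all names are invented:

    #include <stddef.h>

    #define RING_ENTRIES 1000

    struct ring {
        int head;                          /* next slot to overwrite */
        const char *kind[RING_ENTRIES];    /* NULL while unused */
    };

    static void ring_add(struct ring *r, const char *kind)
    {
        r->kind[r->head] = kind;
        if (++r->head >= RING_ENTRIES) r->head = 0;   /* wrap */
    }

    /* Visit entries newest-first, stopping at unused slots or      */
    /* after one full lap.                                          */
    static void ring_dump(const struct ring *r, void (*emit)(const char *))
    {
        int i = r->head;
        int n;

        for (n = 0; n < RING_ENTRIES; n++) {
            if (--i < 0) i = RING_ENTRIES - 1;
            if (r->kind[i] == NULL) return;
            emit(r->kind[i]);
        }
    }
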
+
+# endif /* TRACE_BUF */
+
+/*
+ * A version of GC_push_all that treats all interior pointers as valid
+ * and scans the entire region immediately, in case the contents
+ * change.
+ */
+void GC_push_all_eager(bottom, top)
+ptr_t bottom;
+ptr_t top;
+{
+ word * b = (word *)(((word) bottom + ALIGNMENT-1) & ~(ALIGNMENT-1));
+ word * t = (word *)(((word) top) & ~(ALIGNMENT-1));
+ register word *p;
+ register word q;
+ register word *lim;
+ register ptr_t greatest_ha = GC_greatest_plausible_heap_addr;
+ register ptr_t least_ha = GC_least_plausible_heap_addr;
+# define GC_greatest_plausible_heap_addr greatest_ha
+# define GC_least_plausible_heap_addr least_ha
+
+ if (top == 0) return;
+ /* check all pointers in range and push if they appear */
+ /* to be valid. */
+ lim = t - 1 /* longword */;
+ for (p = b; p <= lim; p = (word *)(((char *)p) + ALIGNMENT)) {
+ q = *p;
+ GC_PUSH_ONE_STACK(q, p);
+ }
+# undef GC_greatest_plausible_heap_addr
+# undef GC_least_plausible_heap_addr
+}
+
+#ifndef THREADS
+/*
+ * A version of GC_push_all that treats all interior pointers as valid
+ * and scans part of the area immediately, to make sure that saved
+ * register values are not lost.
+ * Cold_gc_frame delimits the stack section that must be scanned
+ * eagerly. A zero value indicates that no eager scanning is needed.
+ */
+void GC_push_all_stack_partially_eager(bottom, top, cold_gc_frame)
+ptr_t bottom;
+ptr_t top;
+ptr_t cold_gc_frame;
+{
+ if (!NEED_FIXUP_POINTER && GC_all_interior_pointers) {
+# define EAGER_BYTES 1024
+ /* Push the hot end of the stack eagerly, so that register values */
+ /* saved inside GC frames are marked before they disappear. */
+ /* The rest of the marking can be deferred until later. */
+ if (0 == cold_gc_frame) {
+ GC_push_all_stack(bottom, top);
+ return;
+ }
+ GC_ASSERT(bottom <= cold_gc_frame && cold_gc_frame <= top);
+# ifdef STACK_GROWS_DOWN
+ GC_push_all(cold_gc_frame - sizeof(ptr_t), top);
+ GC_push_all_eager(bottom, cold_gc_frame);
+# else /* STACK_GROWS_UP */
+ GC_push_all(bottom, cold_gc_frame + sizeof(ptr_t));
+ GC_push_all_eager(cold_gc_frame, top);
+# endif /* STACK_GROWS_UP */
+ } else {
+ GC_push_all_eager(bottom, top);
+ }
+# ifdef TRACE_BUF
+ GC_add_trace_entry("GC_push_all_stack", bottom, top);
+# endif
+}
+#endif /* !THREADS */
+
+void GC_push_all_stack(bottom, top)
+ptr_t bottom;
+ptr_t top;
+{
+ if (!NEED_FIXUP_POINTER && GC_all_interior_pointers) {
+ GC_push_all(bottom, top);
+ } else {
+ GC_push_all_eager(bottom, top);
+ }
+}
+
+#if !defined(SMALL_CONFIG) && !defined(USE_MARK_BYTES)
+/* Push all objects reachable from marked objects in the given block */
+/* of size 1 objects. */
+void GC_push_marked1(h, hhdr)
+struct hblk *h;
+register hdr * hhdr;
+{
+ word * mark_word_addr = &(hhdr->hb_marks[0]);
+ register word *p;
+ word *plim;
+ register int i;
+ register word q;
+ register word mark_word;
+ register ptr_t greatest_ha = GC_greatest_plausible_heap_addr;
+ register ptr_t least_ha = GC_least_plausible_heap_addr;
+ register mse * mark_stack_top = GC_mark_stack_top;
+ register mse * mark_stack_limit = GC_mark_stack_limit;
+# define GC_mark_stack_top mark_stack_top
+# define GC_mark_stack_limit mark_stack_limit
+# define GC_greatest_plausible_heap_addr greatest_ha
+# define GC_least_plausible_heap_addr least_ha
+
+ p = (word *)(h->hb_body);
+ plim = (word *)(((word)h) + HBLKSIZE);
+
+ /* go through all words in block */
+ while( p < plim ) {
+ mark_word = *mark_word_addr++;
+ i = 0;
+ while(mark_word != 0) {
+ if (mark_word & 1) {
+ q = p[i];
+ GC_PUSH_ONE_HEAP(q, p + i);
+ }
+ i++;
+ mark_word >>= 1;
+ }
+ p += WORDSZ;
+ }
+# undef GC_greatest_plausible_heap_addr
+# undef GC_least_plausible_heap_addr
+# undef GC_mark_stack_top
+# undef GC_mark_stack_limit
+ GC_mark_stack_top = mark_stack_top;
+}
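
The inner loop above visits the set bits of each mark word by testing bit 0 and shifting right, so a word with few marks exits after its highest set bit. That bit scan in isolation:

    #include <stdint.h>

    /* Call visit(i) for every set bit i of mark_word, lowest       */
    /* first; terminates as soon as the remaining bits are all 0.   */
    static void for_each_set_bit(uintptr_t mark_word,
                                 void (*visit)(unsigned index))
    {
        unsigned i = 0;
        while (mark_word != 0) {
            if (mark_word & 1) visit(i);
            i++;
            mark_word >>= 1;
        }
    }
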
+
+
+#ifndef UNALIGNED
+
+/* Push all objects reachable from marked objects in the given block */
+/* of size 2 objects. */
+void GC_push_marked2(h, hhdr)
+struct hblk *h;
+register hdr * hhdr;
+{
+ word * mark_word_addr = &(hhdr->hb_marks[0]);
+ register word *p;
+ word *plim;
+ register int i;
+ register word q;
+ register word mark_word;
+ register ptr_t greatest_ha = GC_greatest_plausible_heap_addr;
+ register ptr_t least_ha = GC_least_plausible_heap_addr;
+ register mse * mark_stack_top = GC_mark_stack_top;
+ register mse * mark_stack_limit = GC_mark_stack_limit;
+# define GC_mark_stack_top mark_stack_top
+# define GC_mark_stack_limit mark_stack_limit
+# define GC_greatest_plausible_heap_addr greatest_ha
+# define GC_least_plausible_heap_addr least_ha
+
+ p = (word *)(h->hb_body);
+ plim = (word *)(((word)h) + HBLKSIZE);
+
+ /* go through all words in block */
+ while( p < plim ) {
+ mark_word = *mark_word_addr++;
+ i = 0;
+ while(mark_word != 0) {
+ if (mark_word & 1) {
+ q = p[i];
+ GC_PUSH_ONE_HEAP(q, p + i);
+ q = p[i+1];
+	      GC_PUSH_ONE_HEAP(q, p + i + 1);
+ }
+ i += 2;
+ mark_word >>= 2;
+ }
+ p += WORDSZ;
+ }
+# undef GC_greatest_plausible_heap_addr
+# undef GC_least_plausible_heap_addr
+# undef GC_mark_stack_top
+# undef GC_mark_stack_limit
+ GC_mark_stack_top = mark_stack_top;
+}
+
+/* Push all objects reachable from marked objects in the given block */
+/* of size 4 objects. */
+/* There is a risk of mark stack overflow here. But we handle that. */
+/* And only unmarked objects get pushed, so it's not very likely. */
+void GC_push_marked4(h, hhdr)
+struct hblk *h;
+register hdr * hhdr;
+{
+ word * mark_word_addr = &(hhdr->hb_marks[0]);
+ register word *p;
+ word *plim;
+ register int i;
+ register word q;
+ register word mark_word;
+ register ptr_t greatest_ha = GC_greatest_plausible_heap_addr;
+ register ptr_t least_ha = GC_least_plausible_heap_addr;
+ register mse * mark_stack_top = GC_mark_stack_top;
+ register mse * mark_stack_limit = GC_mark_stack_limit;
+# define GC_mark_stack_top mark_stack_top
+# define GC_mark_stack_limit mark_stack_limit
+# define GC_greatest_plausible_heap_addr greatest_ha
+# define GC_least_plausible_heap_addr least_ha
+
+ p = (word *)(h->hb_body);
+ plim = (word *)(((word)h) + HBLKSIZE);
+
+ /* go through all words in block */
+ while( p < plim ) {
+ mark_word = *mark_word_addr++;
+ i = 0;
+ while(mark_word != 0) {
+ if (mark_word & 1) {
+ q = p[i];
+ GC_PUSH_ONE_HEAP(q, p + i);
+ q = p[i+1];
+ GC_PUSH_ONE_HEAP(q, p + i + 1);
+ q = p[i+2];
+ GC_PUSH_ONE_HEAP(q, p + i + 2);
+ q = p[i+3];
+ GC_PUSH_ONE_HEAP(q, p + i + 3);
+ }
+ i += 4;
+ mark_word >>= 4;
+ }
+ p += WORDSZ;
+ }
+# undef GC_greatest_plausible_heap_addr
+# undef GC_least_plausible_heap_addr
+# undef GC_mark_stack_top
+# undef GC_mark_stack_limit
+ GC_mark_stack_top = mark_stack_top;
+}
+
+#endif /* UNALIGNED */
+
+#endif /* SMALL_CONFIG */
+
+/* Push all objects reachable from marked objects in the given block */
+void GC_push_marked(h, hhdr)
+struct hblk *h;
+register hdr * hhdr;
+{
+ register int sz = hhdr -> hb_sz;
+ register int descr = hhdr -> hb_descr;
+ register word * p;
+ register int word_no;
+ register word * lim;
+ register mse * GC_mark_stack_top_reg;
+ register mse * mark_stack_limit = GC_mark_stack_limit;
+
+ /* Some quick shortcuts: */
+ if ((0 | GC_DS_LENGTH) == descr) return;
+ if (GC_block_empty(hhdr)/* nothing marked */) return;
+ GC_n_rescuing_pages++;
+ GC_objects_are_marked = TRUE;
+ if (sz > MAXOBJSZ) {
+ lim = (word *)h;
+ } else {
+ lim = (word *)(h + 1) - sz;
+ }
+
+ switch(sz) {
+# if !defined(SMALL_CONFIG) && !defined(USE_MARK_BYTES)
+ case 1:
+ GC_push_marked1(h, hhdr);
+ break;
+# endif
+# if !defined(SMALL_CONFIG) && !defined(UNALIGNED) && \
+ !defined(USE_MARK_BYTES)
+ case 2:
+ GC_push_marked2(h, hhdr);
+ break;
+ case 4:
+ GC_push_marked4(h, hhdr);
+ break;
+# endif
+ default:
+ GC_mark_stack_top_reg = GC_mark_stack_top;
+ for (p = (word *)h, word_no = 0; p <= lim; p += sz, word_no += sz) {
+ if (mark_bit_from_hdr(hhdr, word_no)) {
+ /* Mark from fields inside the object */
+ PUSH_OBJ((word *)p, hhdr, GC_mark_stack_top_reg, mark_stack_limit);
+# ifdef GATHERSTATS
+ /* Subtract this object from total, since it was */
+ /* added in twice. */
+ GC_composite_in_use -= sz;
+# endif
+ }
+ }
+ GC_mark_stack_top = GC_mark_stack_top_reg;
+ }
+}
+
+#ifndef SMALL_CONFIG
+/* Test whether any page in the given block is dirty */
+GC_bool GC_block_was_dirty(h, hhdr)
+struct hblk *h;
+register hdr * hhdr;
+{
+ register int sz = hhdr -> hb_sz;
+
+ if (sz <= MAXOBJSZ) {
+ return(GC_page_was_dirty(h));
+ } else {
+ register ptr_t p = (ptr_t)h;
+ sz = WORDS_TO_BYTES(sz);
+ while (p < (ptr_t)h + sz) {
+ if (GC_page_was_dirty((struct hblk *)p)) return(TRUE);
+ p += HBLKSIZE;
+ }
+ return(FALSE);
+ }
+}
+#endif /* SMALL_CONFIG */
+
+/* Similar to GC_push_marked, but skip over unallocated blocks,	*/
+/* and return the address of the next plausible block.			*/
+struct hblk * GC_push_next_marked(h)
+struct hblk *h;
+{
+ register hdr * hhdr;
+
+ h = GC_next_used_block(h);
+ if (h == 0) return(0);
+ hhdr = HDR(h);
+ GC_push_marked(h, hhdr);
+ return(h + OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz));
+}
+
+#ifndef SMALL_CONFIG
+/* Identical to above, but mark only from dirty pages */
+struct hblk * GC_push_next_marked_dirty(h)
+struct hblk *h;
+{
+ register hdr * hhdr;
+
+ if (!GC_dirty_maintained) { ABORT("dirty bits not set up"); }
+ for (;;) {
+ h = GC_next_used_block(h);
+ if (h == 0) return(0);
+ hhdr = HDR(h);
+# ifdef STUBBORN_ALLOC
+ if (hhdr -> hb_obj_kind == STUBBORN) {
+ if (GC_page_was_changed(h) && GC_block_was_dirty(h, hhdr)) {
+ break;
+ }
+ } else {
+ if (GC_block_was_dirty(h, hhdr)) break;
+ }
+# else
+ if (GC_block_was_dirty(h, hhdr)) break;
+# endif
+ h += OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz);
+ }
+ GC_push_marked(h, hhdr);
+ return(h + OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz));
+}
+#endif
+
+/* Similar to above, but for uncollectable pages. Needed since we */
+/* do not clear marks for such pages, even for full collections. */
+struct hblk * GC_push_next_marked_uncollectable(h)
+struct hblk *h;
+{
+ register hdr * hhdr = HDR(h);
+
+ for (;;) {
+ h = GC_next_used_block(h);
+ if (h == 0) return(0);
+ hhdr = HDR(h);
+ if (hhdr -> hb_obj_kind == UNCOLLECTABLE) break;
+ h += OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz);
+ }
+ GC_push_marked(h, hhdr);
+ return(h + OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz));
+}
+
+
Added: llvm-gcc-4.2/trunk/boehm-gc/mark_rts.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/mark_rts.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/mark_rts.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/mark_rts.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,652 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+# include <stdio.h>
+# include "private/gc_priv.h"
+
+/* Data structure for list of root sets. */
+/* We keep a hash table, so that we can filter out duplicate additions. */
+/* Under Win32, we need to do a better job of filtering overlaps, so */
+/* we resort to sequential search, and pay the price. */
+/* This is really declared in gc_priv.h:
+struct roots {
+ ptr_t r_start;
+ ptr_t r_end;
+ # if !defined(MSWIN32) && !defined(MSWINCE)
+ struct roots * r_next;
+ # endif
+ GC_bool r_tmp;
+ -- Delete before registering new dynamic libraries
+};
+
+struct roots GC_static_roots[MAX_ROOT_SETS];
+*/
+
+int GC_no_dls = 0; /* Register dynamic library data segments. */
+
+static int n_root_sets = 0;
+
+ /* GC_static_roots[0..n_root_sets) contains the valid root sets. */
+
+# if !defined(NO_DEBUGGING)
+/* For debugging: */
+void GC_print_static_roots()
+{
+ register int i;
+ size_t total = 0;
+
+ for (i = 0; i < n_root_sets; i++) {
+ GC_printf2("From 0x%lx to 0x%lx ",
+ (unsigned long) GC_static_roots[i].r_start,
+ (unsigned long) GC_static_roots[i].r_end);
+ if (GC_static_roots[i].r_tmp) {
+ GC_printf0(" (temporary)\n");
+ } else {
+ GC_printf0("\n");
+ }
+ total += GC_static_roots[i].r_end - GC_static_roots[i].r_start;
+ }
+ GC_printf1("Total size: %ld\n", (unsigned long) total);
+ if (GC_root_size != total) {
+ GC_printf1("GC_root_size incorrect: %ld!!\n",
+ (unsigned long) GC_root_size);
+ }
+}
+# endif /* NO_DEBUGGING */
+
+/* Primarily for debugging support: */
+/* Is the address p in one of the registered static */
+/* root sections? */
+GC_bool GC_is_static_root(p)
+ptr_t p;
+{
+ static int last_root_set = MAX_ROOT_SETS;
+ register int i;
+
+
+ if (last_root_set < n_root_sets
+ && p >= GC_static_roots[last_root_set].r_start
+ && p < GC_static_roots[last_root_set].r_end) return(TRUE);
+ for (i = 0; i < n_root_sets; i++) {
+ if (p >= GC_static_roots[i].r_start
+ && p < GC_static_roots[i].r_end) {
+ last_root_set = i;
+ return(TRUE);
+ }
+ }
+ return(FALSE);
+}
+
+#if !defined(MSWIN32) && !defined(MSWINCE)
+/*
+# define LOG_RT_SIZE 6
+# define RT_SIZE (1 << LOG_RT_SIZE) -- Power of 2, may be != MAX_ROOT_SETS
+
+ struct roots * GC_root_index[RT_SIZE];
+ -- Hash table header. Used only to check whether a range is
+ -- already present.
+ -- really defined in gc_priv.h
+*/
+
+static int rt_hash(addr)
+char * addr;
+{
+ word result = (word) addr;
+# if CPP_WORDSZ > 8*LOG_RT_SIZE
+ result ^= result >> 8*LOG_RT_SIZE;
+# endif
+# if CPP_WORDSZ > 4*LOG_RT_SIZE
+ result ^= result >> 4*LOG_RT_SIZE;
+# endif
+ result ^= result >> 2*LOG_RT_SIZE;
+ result ^= result >> LOG_RT_SIZE;
+ result &= (RT_SIZE-1);
+ return(result);
+}
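
rt_hash folds the high bits of the address onto the low bits with successive XOR-shifts until only LOG_RT_SIZE bits carry information, then masks. The same folding as a standalone function; LOG_SIZE here is an illustrative stand-in for LOG_RT_SIZE, and the 64-bit fold is guarded the same way CPP_WORDSZ guards it above:

    #include <stdint.h>

    #define LOG_SIZE 6
    #define TBL_SIZE (1u << LOG_SIZE)

    static unsigned fold_hash(uintptr_t a)
    {
        uintptr_t r = a;
    #if UINTPTR_MAX > 0xFFFFFFFFu     /* extra fold on 64-bit words */
        r ^= r >> (8 * LOG_SIZE);
    #endif
        r ^= r >> (4 * LOG_SIZE);
        r ^= r >> (2 * LOG_SIZE);
        r ^= r >> LOG_SIZE;
        return (unsigned)(r & (TBL_SIZE - 1));
    }
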
+
+/* Is a range starting at b already in the table? If so return a */
+/* pointer to it, else NIL. */
+struct roots * GC_roots_present(b)
+char *b;
+{
+ register int h = rt_hash(b);
+ register struct roots *p = GC_root_index[h];
+
+ while (p != 0) {
+ if (p -> r_start == (ptr_t)b) return(p);
+ p = p -> r_next;
+ }
+    return(0);
+}
+
+/* Add the given root structure to the index. */
+static void add_roots_to_index(p)
+struct roots *p;
+{
+ register int h = rt_hash(p -> r_start);
+
+ p -> r_next = GC_root_index[h];
+ GC_root_index[h] = p;
+}
+
+# else /* MSWIN32 || MSWINCE */
+
+# define add_roots_to_index(p)
+
+# endif
+
+
+
+
+word GC_root_size = 0;
+
+void GC_add_roots(b, e)
+char * b; char * e;
+{
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ GC_add_roots_inner(b, e, FALSE);
+ UNLOCK();
+ ENABLE_SIGNALS();
+}
+
+
+/* Add [b,e) to the root set. Adding the same interval a second time */
+/* is a moderately fast noop, and hence benign. We do not handle */
+/* different but overlapping intervals efficiently. (We do handle */
+/* them correctly.) */
+/* Tmp specifies that the interval may be deleted before */
+/* reregistering dynamic libraries. */
+void GC_add_roots_inner(b, e, tmp)
+char * b; char * e;
+GC_bool tmp;
+{
+ struct roots * old;
+
+# if defined(MSWIN32) || defined(MSWINCE)
+ /* Spend the time to ensure that there are no overlapping */
+ /* or adjacent intervals. */
+ /* This could be done faster with e.g. a */
+ /* balanced tree. But the execution time here is */
+ /* virtually guaranteed to be dominated by the time it */
+ /* takes to scan the roots. */
+ {
+ register int i;
+
+ for (i = 0; i < n_root_sets; i++) {
+ old = GC_static_roots + i;
+ if ((ptr_t)b <= old -> r_end && (ptr_t)e >= old -> r_start) {
+	    if ((ptr_t)b < old -> r_start) {
+	        GC_root_size += (old -> r_start - (ptr_t)b);
+	        old -> r_start = (ptr_t)b;
+	    }
+	    if ((ptr_t)e > old -> r_end) {
+	        GC_root_size += ((ptr_t)e - old -> r_end);
+	        old -> r_end = (ptr_t)e;
+	    }
+ old -> r_tmp &= tmp;
+ break;
+ }
+ }
+ if (i < n_root_sets) {
+ /* merge other overlapping intervals */
+ struct roots *other;
+
+ for (i++; i < n_root_sets; i++) {
+ other = GC_static_roots + i;
+ b = (char *)(other -> r_start);
+ e = (char *)(other -> r_end);
+ if ((ptr_t)b <= old -> r_end && (ptr_t)e >= old -> r_start) {
+		  if ((ptr_t)b < old -> r_start) {
+		      GC_root_size += (old -> r_start - (ptr_t)b);
+		      old -> r_start = (ptr_t)b;
+		  }
+		  if ((ptr_t)e > old -> r_end) {
+		      GC_root_size += ((ptr_t)e - old -> r_end);
+		      old -> r_end = (ptr_t)e;
+		  }
+ old -> r_tmp &= other -> r_tmp;
+ /* Delete this entry. */
+ GC_root_size -= (other -> r_end - other -> r_start);
+ other -> r_start = GC_static_roots[n_root_sets-1].r_start;
+ other -> r_end = GC_static_roots[n_root_sets-1].r_end;
+ n_root_sets--;
+ }
+ }
+ return;
+ }
+ }
+# else
+ old = GC_roots_present(b);
+ if (old != 0) {
+ if ((ptr_t)e <= old -> r_end) /* already there */ return;
+ /* else extend */
+ GC_root_size += (ptr_t)e - old -> r_end;
+ old -> r_end = (ptr_t)e;
+ return;
+ }
+# endif
+ if (n_root_sets == MAX_ROOT_SETS) {
+ ABORT("Too many root sets\n");
+ }
+ GC_static_roots[n_root_sets].r_start = (ptr_t)b;
+ GC_static_roots[n_root_sets].r_end = (ptr_t)e;
+ GC_static_roots[n_root_sets].r_tmp = tmp;
+# if !defined(MSWIN32) && !defined(MSWINCE)
+ GC_static_roots[n_root_sets].r_next = 0;
+# endif
+ add_roots_to_index(GC_static_roots + n_root_sets);
+ GC_root_size += (ptr_t)e - (ptr_t)b;
+ n_root_sets++;
+}
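
When an existing root interval is extended above, the size delta has to be computed before the bound is overwritten, since afterwards the subtraction yields zero. That bookkeeping pattern reduced to a standalone helper (hypothetical names):

    #include <stddef.h>

    /* Grow the interval [*startp, *endp) to cover [b, e) and keep  */
    /* *total equal to the sum of interval sizes: take each delta   */
    /* before updating the bound it is measured against.            */
    static void grow_interval(char **startp, char **endp, size_t *total,
                              char *b, char *e)
    {
        if (b < *startp) { *total += (size_t)(*startp - b); *startp = b; }
        if (e > *endp)   { *total += (size_t)(e - *endp);   *endp = e; }
    }
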
+
+static GC_bool roots_were_cleared = FALSE;
+
+void GC_clear_roots GC_PROTO((void))
+{
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ roots_were_cleared = TRUE;
+ n_root_sets = 0;
+ GC_root_size = 0;
+# if !defined(MSWIN32) && !defined(MSWINCE)
+ {
+ register int i;
+
+ for (i = 0; i < RT_SIZE; i++) GC_root_index[i] = 0;
+ }
+# endif
+ UNLOCK();
+ ENABLE_SIGNALS();
+}
+
+/* Internal use only; lock held. */
+static void GC_remove_root_at_pos(i)
+int i;
+{
+ GC_root_size -= (GC_static_roots[i].r_end - GC_static_roots[i].r_start);
+ GC_static_roots[i].r_start = GC_static_roots[n_root_sets-1].r_start;
+ GC_static_roots[i].r_end = GC_static_roots[n_root_sets-1].r_end;
+ GC_static_roots[i].r_tmp = GC_static_roots[n_root_sets-1].r_tmp;
+ n_root_sets--;
+}
+
+#if !defined(MSWIN32) && !defined(MSWINCE)
+static void GC_rebuild_root_index()
+{
+ register int i;
+
+ for (i = 0; i < RT_SIZE; i++) GC_root_index[i] = 0;
+ for (i = 0; i < n_root_sets; i++)
+ add_roots_to_index(GC_static_roots + i);
+}
+#endif
+
+/* Internal use only; lock held. */
+void GC_remove_tmp_roots()
+{
+ register int i;
+
+ for (i = 0; i < n_root_sets; ) {
+ if (GC_static_roots[i].r_tmp) {
+ GC_remove_root_at_pos(i);
+ } else {
+ i++;
+ }
+ }
+ #if !defined(MSWIN32) && !defined(MSWINCE)
+ GC_rebuild_root_index();
+ #endif
+}
+
+#if !defined(MSWIN32) && !defined(MSWINCE)
+void GC_remove_roots(b, e)
+char * b; char * e;
+{
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ GC_remove_roots_inner(b, e);
+ UNLOCK();
+ ENABLE_SIGNALS();
+}
+
+/* Should only be called when the lock is held */
+void GC_remove_roots_inner(b,e)
+char * b; char * e;
+{
+ int i;
+ for (i = 0; i < n_root_sets; ) {
+ if (GC_static_roots[i].r_start >= (ptr_t)b && GC_static_roots[i].r_end <= (ptr_t)e) {
+ GC_remove_root_at_pos(i);
+ } else {
+ i++;
+ }
+ }
+ GC_rebuild_root_index();
+}
+#endif /* !defined(MSWIN32) && !defined(MSWINCE) */
+
+#if defined(MSWIN32) || defined(_WIN32_WCE_EMULATION)
+/* Workaround for the OS mapping and unmapping behind our back: */
+/* Is the address p in one of the temporary static root sections? */
+GC_bool GC_is_tmp_root(p)
+ptr_t p;
+{
+ static int last_root_set = MAX_ROOT_SETS;
+ register int i;
+
+ if (last_root_set < n_root_sets
+ && p >= GC_static_roots[last_root_set].r_start
+ && p < GC_static_roots[last_root_set].r_end)
+ return GC_static_roots[last_root_set].r_tmp;
+ for (i = 0; i < n_root_sets; i++) {
+ if (p >= GC_static_roots[i].r_start
+ && p < GC_static_roots[i].r_end) {
+ last_root_set = i;
+ return GC_static_roots[i].r_tmp;
+ }
+ }
+ return(FALSE);
+}
+#endif /* MSWIN32 || _WIN32_WCE_EMULATION */
+
+ptr_t GC_approx_sp()
+{
+ VOLATILE word dummy;
+
+ dummy = 42; /* Force stack to grow if necessary. Otherwise the */
+ /* later accesses might cause the kernel to think we're */
+ /* doing something wrong. */
+# ifdef _MSC_VER
+# pragma warning(disable:4172)
+# endif
+ return((ptr_t)(&dummy));
+# ifdef _MSC_VER
+# pragma warning(default:4172)
+# endif
+}
+
+/*
+ * Data structure for excluded static roots.
+ * Real declaration is in gc_priv.h.
+
+struct exclusion {
+ ptr_t e_start;
+ ptr_t e_end;
+};
+
+struct exclusion GC_excl_table[MAX_EXCLUSIONS];
+ -- Array of exclusions, ascending
+ -- address order.
+*/
+
+size_t GC_excl_table_entries = 0; /* Number of entries in use. */
+
+/* Return the first exclusion range that includes an address >= start_addr */
+/* Assumes the exclusion table contains at least one entry (namely the */
+/* GC data structures). */
+struct exclusion * GC_next_exclusion(start_addr)
+ptr_t start_addr;
+{
+ size_t low = 0;
+ size_t high = GC_excl_table_entries - 1;
+ size_t mid;
+
+ while (high > low) {
+ mid = (low + high) >> 1;
+ /* low <= mid < high */
+ if ((word) GC_excl_table[mid].e_end <= (word) start_addr) {
+ low = mid + 1;
+ } else {
+ high = mid;
+ }
+ }
+ if ((word) GC_excl_table[low].e_end <= (word) start_addr) return 0;
+ return GC_excl_table + low;
+}
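
GC_next_exclusion is a lower-bound binary search over ranges kept sorted by end address: it narrows to the first range whose end lies above start_addr. The same search over a bare array of end addresses, using a half-open [low, high) window instead of the inclusive bounds above (names hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    /* Index of the first element with ends[i] > key, or n if none. */
    static size_t first_end_above(const uintptr_t *ends, size_t n,
                                  uintptr_t key)
    {
        size_t low = 0, high = n;
        while (high > low) {
            size_t mid = low + (high - low) / 2;  /* low <= mid < high */
            if (ends[mid] <= key) low = mid + 1;
            else high = mid;
        }
        return low;
    }
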
+
+void GC_exclude_static_roots(start, finish)
+GC_PTR start;
+GC_PTR finish;
+{
+ struct exclusion * next;
+ size_t next_index, i;
+
+ if (0 == GC_excl_table_entries) {
+ next = 0;
+ } else {
+ next = GC_next_exclusion(start);
+ }
+ if (0 != next) {
+ if ((word)(next -> e_start) < (word) finish) {
+ /* incomplete error check. */
+ ABORT("exclusion ranges overlap");
+ }
+ if ((word)(next -> e_start) == (word) finish) {
+ /* extend old range backwards */
+ next -> e_start = (ptr_t)start;
+ return;
+ }
+ next_index = next - GC_excl_table;
+ for (i = GC_excl_table_entries; i > next_index; --i) {
+ GC_excl_table[i] = GC_excl_table[i-1];
+ }
+ } else {
+ next_index = GC_excl_table_entries;
+ }
+ if (GC_excl_table_entries == MAX_EXCLUSIONS) ABORT("Too many exclusions");
+ GC_excl_table[next_index].e_start = (ptr_t)start;
+ GC_excl_table[next_index].e_end = (ptr_t)finish;
+ ++GC_excl_table_entries;
+}
+
+/* Invoke push_conditional on ranges that are not excluded. */
+void GC_push_conditional_with_exclusions(bottom, top, all)
+ptr_t bottom;
+ptr_t top;
+int all;
+{
+ struct exclusion * next;
+ ptr_t excl_start;
+
+ while (bottom < top) {
+ next = GC_next_exclusion(bottom);
+ if (0 == next || (excl_start = next -> e_start) >= top) {
+ GC_push_conditional(bottom, top, all);
+ return;
+ }
+ if (excl_start > bottom) GC_push_conditional(bottom, excl_start, all);
+ bottom = next -> e_end;
+ }
+}
+
+/*
+ * In the absence of threads, push the stack contents.
+ * In the presence of threads, push enough of the current stack
+ * to ensure that callee-save registers saved in collector frames have been
+ * seen.
+ */
+void GC_push_current_stack(cold_gc_frame)
+ptr_t cold_gc_frame;
+{
+# if defined(THREADS)
+ if (0 == cold_gc_frame) return;
+# ifdef STACK_GROWS_DOWN
+ GC_push_all_eager(GC_approx_sp(), cold_gc_frame);
+ /* For IA64, the register stack backing store is handled */
+ /* in the thread-specific code. */
+# else
+ GC_push_all_eager( cold_gc_frame, GC_approx_sp() );
+# endif
+# else
+# ifdef STACK_GROWS_DOWN
+ GC_push_all_stack_partially_eager( GC_approx_sp(), GC_stackbottom,
+ cold_gc_frame );
+# ifdef IA64
+ /* We also need to push the register stack backing store. */
+ /* This should really be done in the same way as the */
+ /* regular stack. For now we fudge it a bit. */
+ /* Note that the backing store grows up, so we can't use */
+ /* GC_push_all_stack_partially_eager. */
+ {
+ extern word GC_save_regs_ret_val;
+ /* Previously set to backing store pointer. */
+ ptr_t bsp = (ptr_t) GC_save_regs_ret_val;
+ ptr_t cold_gc_bs_pointer;
+ if (GC_all_interior_pointers) {
+ cold_gc_bs_pointer = bsp - 2048;
+ if (cold_gc_bs_pointer < BACKING_STORE_BASE) {
+ cold_gc_bs_pointer = BACKING_STORE_BASE;
+ } else {
+ GC_push_all_stack(BACKING_STORE_BASE, cold_gc_bs_pointer);
+ }
+ } else {
+ cold_gc_bs_pointer = BACKING_STORE_BASE;
+ }
+ GC_push_all_eager(cold_gc_bs_pointer, bsp);
+ /* All values should be sufficiently aligned that we */
+	  /* don't have to worry about the boundary.		*/
+ }
+# endif
+# else
+ GC_push_all_stack_partially_eager( GC_stackbottom, GC_approx_sp(),
+ cold_gc_frame );
+# endif
+# endif /* !THREADS */
+}
+
+/*
+ * Push GC internal roots. Only called if there is some reason to believe
+ * these would not otherwise get registered.
+ */
+void GC_push_gc_structures GC_PROTO((void))
+{
+ GC_push_finalizer_structures();
+ GC_push_stubborn_structures();
+# if defined(THREADS)
+ GC_push_thread_structures();
+# endif
+}
+
+#ifdef THREAD_LOCAL_ALLOC
+ void GC_mark_thread_local_free_lists();
+#endif
+
+void GC_cond_register_dynamic_libraries()
+{
+# if (defined(DYNAMIC_LOADING) || defined(MSWIN32) || defined(MSWINCE) \
+ || defined(PCR)) && !defined(SRC_M3)
+ GC_remove_tmp_roots();
+ if (!GC_no_dls) GC_register_dynamic_libraries();
+# else
+ GC_no_dls = TRUE;
+# endif
+}
+
+/*
+ * Call the mark routines (GC_tl_push for a single pointer, GC_push_conditional
+ * on groups of pointers) on every top level accessible pointer.
+ * If all is FALSE, arrange to push only possibly altered values.
+ * Cold_gc_frame is an address inside a GC frame that
+ * remains valid until all marking is complete.
+ * A zero value indicates that it's OK to miss some
+ * register values.
+ */
+void GC_push_roots(all, cold_gc_frame)
+GC_bool all;
+ptr_t cold_gc_frame;
+{
+ int i;
+ int kind;
+
+ /*
+     * Push static data first.  This must happen early on, since it's
+ * not robust against mark stack overflow.
+ */
+ /* Reregister dynamic libraries, in case one got added. */
+ /* There is some argument for doing this as late as possible, */
+ /* especially on win32, where it can change asynchronously. */
+ /* In those cases, we do it here. But on other platforms, it's */
+ /* not safe with the world stopped, so we do it earlier. */
+# if !defined(REGISTER_LIBRARIES_EARLY)
+ GC_cond_register_dynamic_libraries();
+# endif
+
+ /* Mark everything in static data areas */
+ for (i = 0; i < n_root_sets; i++) {
+ GC_push_conditional_with_exclusions(
+ GC_static_roots[i].r_start,
+ GC_static_roots[i].r_end, all);
+ }
+
+ /* Mark all free list header blocks, if those were allocated from */
+ /* the garbage collected heap. This makes sure they don't */
+ /* disappear if we are not marking from static data. It also */
+ /* saves us the trouble of scanning them, and possibly that of */
+ /* marking the freelists. */
+ for (kind = 0; kind < GC_n_kinds; kind++) {
+ GC_PTR base = GC_base(GC_obj_kinds[kind].ok_freelist);
+ if (0 != base) {
+ GC_set_mark_bit(base);
+ }
+ }
+
+ /* Mark from GC internal roots if those might otherwise have */
+ /* been excluded. */
+ if (GC_no_dls || roots_were_cleared) {
+ GC_push_gc_structures();
+ }
+
+ /* Mark thread local free lists, even if their mark */
+ /* descriptor excludes the link field. */
+ /* If the world is not stopped, this is unsafe. It is */
+ /* also unnecessary, since we will do this again with the */
+ /* world stopped. */
+# ifdef THREAD_LOCAL_ALLOC
+ if (GC_world_stopped) GC_mark_thread_local_free_lists();
+# endif
+
+ /*
+ * Now traverse stacks, and mark from register contents.
+ * These must be done last, since they can legitimately overflow
+ * the mark stack.
+ */
+# ifdef USE_GENERIC_PUSH_REGS
+ GC_generic_push_regs(cold_gc_frame);
+ /* Also pushes stack, so that we catch callee-save registers */
+ /* saved inside the GC_push_regs frame. */
+# else
+ /*
+ * push registers - i.e., call GC_push_one(r) for each
+ * register contents r.
+ */
+ GC_push_regs(); /* usually defined in machine_dep.c */
+ GC_push_current_stack(cold_gc_frame);
+ /* In the threads case, this only pushes collector frames. */
+ /* In the case of linux threads on IA64, the hot section of */
+ /* the main stack is marked here, but the register stack */
+ /* backing store is handled in the threads-specific code. */
+# endif
+ if (GC_push_other_roots != 0) (*GC_push_other_roots)();
+ /* In the threads case, this also pushes thread stacks. */
+ /* Note that without interior pointer recognition lots */
+ /* of stuff may have been pushed already, and this */
+ /* should be careful about mark stack overflows. */
+}
+
Added: llvm-gcc-4.2/trunk/boehm-gc/mips_sgi_mach_dep.s
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/mips_sgi_mach_dep.s?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/mips_sgi_mach_dep.s (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/mips_sgi_mach_dep.s Thu Nov 8 16:56:19 2007
@@ -0,0 +1,46 @@
+#include <sys/regdef.h>
+#include <sys/asm.h>
+/* This file must be preprocessed. But the SGI assembler always does */
+/* that. Furthermore, a generic preprocessor won't do, since some of */
+/* the SGI-supplied include files rely on behavior of the MIPS */
+/* assembler. Hence we treat and name this file as though it required */
+/* no preprocessing. */
+
+# define call_push(x) move $4,x; jal GC_push_one
+
+ .option pic2
+ .text
+/* Mark from machine registers that are saved by C compiler */
+# define FRAMESZ 32
+# define RAOFF FRAMESZ-SZREG
+# define GPOFF FRAMESZ-(2*SZREG)
+ NESTED(GC_push_regs, FRAMESZ, ra)
+ .mask 0x80000000,-SZREG # inform debugger of saved ra loc
+ move t0,gp
+ SETUP_GPX(t8)
+ PTR_SUBU sp,FRAMESZ
+# ifdef SETUP_GP64
+ SETUP_GP64(GPOFF, GC_push_regs)
+# endif
+ SAVE_GP(GPOFF)
+ REG_S ra,RAOFF(sp)
+# if (_MIPS_SIM == _ABIO32)
+ call_push($2)
+ call_push($3)
+# endif
+ call_push($16)
+ call_push($17)
+ call_push($18)
+ call_push($19)
+ call_push($20)
+ call_push($21)
+ call_push($22)
+ call_push($23)
+ call_push($30)
+ REG_L ra,RAOFF(sp)
+# ifdef RESTORE_GP64
+ RESTORE_GP64
+# endif
+ PTR_ADDU sp,FRAMESZ
+ j ra
+ .end GC_push_regs
Added: llvm-gcc-4.2/trunk/boehm-gc/mips_ultrix_mach_dep.s
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/mips_ultrix_mach_dep.s?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/mips_ultrix_mach_dep.s (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/mips_ultrix_mach_dep.s Thu Nov 8 16:56:19 2007
@@ -0,0 +1,26 @@
+# define call_push(x) move $4,x; jal GC_push_one
+
+ .text
+ # Mark from machine registers that are saved by C compiler
+ .globl GC_push_regs
+ .ent GC_push_regs
+GC_push_regs:
+ subu $sp,8 ## Need to save only return address
+ sw $31,4($sp)
+ .mask 0x80000000,-4
+ .frame $sp,8,$31
+ call_push($2)
+ call_push($3)
+ call_push($16)
+ call_push($17)
+ call_push($18)
+ call_push($19)
+ call_push($20)
+ call_push($21)
+ call_push($22)
+ call_push($23)
+ call_push($30)
+ lw $31,4($sp)
+ addu $sp,8
+ j $31
+ .end GC_push_regs
Added: llvm-gcc-4.2/trunk/boehm-gc/misc.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/misc.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/misc.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/misc.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,1185 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1999-2001 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+/* Boehm, July 31, 1995 5:02 pm PDT */
+
+
+#include <stdio.h>
+#include <limits.h>
+#ifndef _WIN32_WCE
+#include <signal.h>
+#endif
+
+#define I_HIDE_POINTERS /* To make GC_call_with_alloc_lock visible */
+#include "private/gc_pmark.h"
+
+#ifdef GC_SOLARIS_THREADS
+# include <sys/syscall.h>
+#endif
+#if defined(MSWIN32) || defined(MSWINCE)
+# define WIN32_LEAN_AND_MEAN
+# define NOSERVICE
+# include <windows.h>
+# include <tchar.h>
+#endif
+
+# ifdef THREADS
+# ifdef PCR
+# include "il/PCR_IL.h"
+ PCR_Th_ML GC_allocate_ml;
+# else
+# ifdef SRC_M3
+ /* Critical section counter is defined in the M3 runtime */
+ /* That's all we use. */
+# else
+# ifdef GC_SOLARIS_THREADS
+ mutex_t GC_allocate_ml; /* Implicitly initialized. */
+# else
+# if defined(GC_WIN32_THREADS)
+# if defined(GC_PTHREADS)
+ pthread_mutex_t GC_allocate_ml = PTHREAD_MUTEX_INITIALIZER;
+# elif defined(GC_DLL)
+ __declspec(dllexport) CRITICAL_SECTION GC_allocate_ml;
+# else
+ CRITICAL_SECTION GC_allocate_ml;
+# endif
+# else
+# if defined(GC_PTHREADS) && !defined(GC_SOLARIS_THREADS)
+# if defined(USE_SPIN_LOCK)
+ pthread_t GC_lock_holder = NO_THREAD;
+# else
+ pthread_mutex_t GC_allocate_ml = PTHREAD_MUTEX_INITIALIZER;
+ pthread_t GC_lock_holder = NO_THREAD;
+ /* Used only for assertions, and to prevent */
+ /* recursive reentry in the system call wrapper. */
+# endif
+# else
+ --> declare allocator lock here
+# endif
+# endif
+# endif
+# endif
+# endif
+# endif
+
+#if defined(NOSYS) || defined(ECOS)
+#undef STACKBASE
+#endif
+
+/* Don't unnecessarily call GC_register_main_static_data() in case	*/
+/* dyn_load.c isn't linked in. */
+#ifdef DYNAMIC_LOADING
+# define GC_REGISTER_MAIN_STATIC_DATA() GC_register_main_static_data()
+#else
+# define GC_REGISTER_MAIN_STATIC_DATA() TRUE
+#endif
+
+GC_FAR struct _GC_arrays GC_arrays /* = { 0 } */;
+
+
+GC_bool GC_debugging_started = FALSE;
+ /* defined here so we don't have to load debug_malloc.o */
+
+void (*GC_check_heap) GC_PROTO((void)) = (void (*) GC_PROTO((void)))0;
+void (*GC_print_all_smashed) GC_PROTO((void)) = (void (*) GC_PROTO((void)))0;
+
+void (*GC_start_call_back) GC_PROTO((void)) = (void (*) GC_PROTO((void)))0;
+
+ptr_t GC_stackbottom = 0;
+
+#ifdef IA64
+ ptr_t GC_register_stackbottom = 0;
+#endif
+
+GC_bool GC_dont_gc = 0;
+
+GC_bool GC_dont_precollect = 0;
+
+GC_bool GC_quiet = 0;
+
+GC_bool GC_print_stats = 0;
+
+GC_bool GC_print_back_height = 0;
+
+#ifndef NO_DEBUGGING
+ GC_bool GC_dump_regularly = 0; /* Generate regular debugging dumps. */
+#endif
+
+#ifdef KEEP_BACK_PTRS
+ long GC_backtraces = 0; /* Number of random backtraces to */
+ /* generate for each GC. */
+#endif
+
+#ifdef FIND_LEAK
+ int GC_find_leak = 1;
+#else
+ int GC_find_leak = 0;
+#endif
+
+#ifdef ALL_INTERIOR_POINTERS
+ int GC_all_interior_pointers = 1;
+#else
+ int GC_all_interior_pointers = 0;
+#endif
+
+long GC_large_alloc_warn_interval = 5;
+ /* Interval between unsuppressed warnings. */
+
+long GC_large_alloc_warn_suppressed = 0;
+ /* Number of warnings suppressed so far. */
+
+/*ARGSUSED*/
+GC_PTR GC_default_oom_fn GC_PROTO((size_t bytes_requested))
+{
+ return(0);
+}
+
+GC_PTR (*GC_oom_fn) GC_PROTO((size_t bytes_requested)) = GC_default_oom_fn;
+
+extern signed_word GC_mem_found;
+
+void * GC_project2(arg1, arg2)
+void *arg1;
+void *arg2;
+{
+ return arg2;
+}
+
+# ifdef MERGE_SIZES
+ /* Set things up so that GC_size_map[i] >= words(i), */
+ /* but not too much bigger */
+ /* and so that size_map contains relatively few distinct entries */
+ /* This is stolen from Russ Atkinson's Cedar quantization */
+    /* algorithm (but we precompute it).			*/
+
+
+ void GC_init_size_map()
+ {
+ register unsigned i;
+
+ /* Map size 0 to something bigger. */
+ /* This avoids problems at lower levels. */
+ /* One word objects don't have to be 2 word aligned, */
+ /* unless we're using mark bytes. */
+ for (i = 0; i < sizeof(word); i++) {
+ GC_size_map[i] = MIN_WORDS;
+ }
+# if MIN_WORDS > 1
+ GC_size_map[sizeof(word)] = MIN_WORDS;
+# else
+ GC_size_map[sizeof(word)] = ROUNDED_UP_WORDS(sizeof(word));
+# endif
+ for (i = sizeof(word) + 1; i <= 8 * sizeof(word); i++) {
+ GC_size_map[i] = ALIGNED_WORDS(i);
+ }
+ for (i = 8*sizeof(word) + 1; i <= 16 * sizeof(word); i++) {
+ GC_size_map[i] = (ROUNDED_UP_WORDS(i) + 1) & (~1);
+ }
+# ifdef GC_GCJ_SUPPORT
+ /* Make all sizes up to 32 words predictable, so that a */
+ /* compiler can statically perform the same computation, */
+ /* or at least a computation that results in similar size */
+ /* classes. */
+ for (i = 16*sizeof(word) + 1; i <= 32 * sizeof(word); i++) {
+ GC_size_map[i] = (ROUNDED_UP_WORDS(i) + 3) & (~3);
+ }
+# endif
+ /* We leave the rest of the array to be filled in on demand. */
+ }
+
+ /* Fill in additional entries in GC_size_map, including the ith one */
+ /* We assume the ith entry is currently 0. */
+ /* Note that a filled in section of the array ending at n always */
+ /* has length at least n/4. */
+ void GC_extend_size_map(i)
+ word i;
+ {
+ word orig_word_sz = ROUNDED_UP_WORDS(i);
+ word word_sz = orig_word_sz;
+ register word byte_sz = WORDS_TO_BYTES(word_sz);
+ /* The size we try to preserve. */
+    					/* Close to i, unless this would    */
+ /* introduce too many distinct sizes. */
+ word smaller_than_i = byte_sz - (byte_sz >> 3);
+ word much_smaller_than_i = byte_sz - (byte_sz >> 2);
+ register word low_limit; /* The lowest indexed entry we */
+ /* initialize. */
+ register word j;
+
+ if (GC_size_map[smaller_than_i] == 0) {
+ low_limit = much_smaller_than_i;
+ while (GC_size_map[low_limit] != 0) low_limit++;
+ } else {
+ low_limit = smaller_than_i + 1;
+ while (GC_size_map[low_limit] != 0) low_limit++;
+ word_sz = ROUNDED_UP_WORDS(low_limit);
+ word_sz += word_sz >> 3;
+ if (word_sz < orig_word_sz) word_sz = orig_word_sz;
+ }
+# ifdef ALIGN_DOUBLE
+ word_sz += 1;
+ word_sz &= ~1;
+# endif
+ if (word_sz > MAXOBJSZ) {
+ word_sz = MAXOBJSZ;
+ }
+ /* If we can fit the same number of larger objects in a block, */
+ /* do so. */
+ {
+ size_t number_of_objs = BODY_SZ/word_sz;
+ word_sz = BODY_SZ/number_of_objs;
+# ifdef ALIGN_DOUBLE
+ word_sz &= ~1;
+# endif
+ }
+ byte_sz = WORDS_TO_BYTES(word_sz);
+ if (GC_all_interior_pointers) {
+ /* We need one extra byte; don't fill in GC_size_map[byte_sz] */
+ byte_sz -= EXTRA_BYTES;
+ }
+
+ for (j = low_limit; j <= byte_sz; j++) GC_size_map[j] = word_sz;
+ }
+# endif
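
Both routines above quantize request sizes so that many byte sizes share one object size, trading a little internal fragmentation for far fewer distinct free lists. A toy version of the rounding policy; the thresholds are illustrative and not the collector's actual boundaries:

    #include <stddef.h>

    /* Map a byte request to a size class in words: exact up to 8   */
    /* words, then even word counts, then multiples of four.        */
    static size_t size_class_words(size_t bytes, size_t wordsz)
    {
        size_t words = (bytes + wordsz - 1) / wordsz;   /* round up */
        if (words <= 8)  return words;
        if (words <= 16) return (words + 1) & ~(size_t)1;
        return (words + 3) & ~(size_t)3;
    }
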
+
+
+/*
+ * The following is a gross hack to deal with a problem that can occur
+ * on machines that are sloppy about stack frame sizes, notably SPARC.
+ * Bogus pointers may be written to the stack and not cleared for
+ * a LONG time, because they always fall into holes in stack frames
+ * that are not written. We partially address this by clearing
+ * sections of the stack whenever we get control.
+ */
+word GC_stack_last_cleared = 0; /* GC_no when we last did this */
+# ifdef THREADS
+# define BIG_CLEAR_SIZE 2048 /* Clear this much now and then. */
+# define SMALL_CLEAR_SIZE 256 /* Clear this much every time. */
+# endif
+# define CLEAR_SIZE 213 /* Granularity for GC_clear_stack_inner */
+# define DEGRADE_RATE 50
+
+word GC_min_sp; /* Coolest stack pointer value from which we've */
+ /* already cleared the stack. */
+
+word GC_high_water;
+ /* "hottest" stack pointer value we have seen */
+ /* recently. Degrades over time. */
+
+word GC_words_allocd_at_reset;
+
+#if defined(ASM_CLEAR_CODE)
+ extern ptr_t GC_clear_stack_inner();
+#else
+/* Clear the stack up to about limit. Return arg. */
+/*ARGSUSED*/
+ptr_t GC_clear_stack_inner(arg, limit)
+ptr_t arg;
+word limit;
+{
+ word dummy[CLEAR_SIZE];
+
+ BZERO(dummy, CLEAR_SIZE*sizeof(word));
+ if ((word)(dummy) COOLER_THAN limit) {
+ (void) GC_clear_stack_inner(arg, limit);
+ }
+ /* Make sure the recursive call is not a tail call, and the bzero */
+ /* call is not recognized as dead code. */
+ GC_noop1((word)dummy);
+ return(arg);
+}
+#endif
+
+/* Clear some of the inaccessible part of the stack. Returns its */
+/* argument, so it can be used in a tail call position, hence clearing */
+/* another frame. */
+ptr_t GC_clear_stack(arg)
+ptr_t arg;
+{
+ register word sp = (word)GC_approx_sp(); /* Hotter than actual sp */
+# ifdef THREADS
+ word dummy[SMALL_CLEAR_SIZE];
+ static unsigned random_no = 0;
+ /* Should be more random than it is ... */
+ /* Used to occasionally clear a bigger */
+ /* chunk. */
+# endif
+ register word limit;
+
+# define SLOP 400
+ /* Extra bytes we clear every time. This clears our own */
+ /* activation record, and should cause more frequent */
+ /* clearing near the cold end of the stack, a good thing. */
+# define GC_SLOP 4000
+	/* We make GC_high_water this much hotter than we really	*/
+	/* saw it, to cover for GC noise etc. above our current frame.	*/
+# define CLEAR_THRESHOLD 100000
+ /* We restart the clearing process after this many bytes of */
+ /* allocation. Otherwise very heavily recursive programs */
+ /* with sparse stacks may result in heaps that grow almost */
+ /* without bounds. As the heap gets larger, collection */
+ /* frequency decreases, thus clearing frequency would decrease, */
+ /* thus more junk remains accessible, thus the heap gets */
+ /* larger ... */
+# ifdef THREADS
+ if (++random_no % 13 == 0) {
+ limit = sp;
+ MAKE_HOTTER(limit, BIG_CLEAR_SIZE*sizeof(word));
+ limit &= ~0xf; /* Make it sufficiently aligned for assembly */
+ /* implementations of GC_clear_stack_inner. */
+ return GC_clear_stack_inner(arg, limit);
+ } else {
+ BZERO(dummy, SMALL_CLEAR_SIZE*sizeof(word));
+ return arg;
+ }
+# else
+ if (GC_gc_no > GC_stack_last_cleared) {
+ /* Start things over, so we clear the entire stack again */
+ if (GC_stack_last_cleared == 0) GC_high_water = (word) GC_stackbottom;
+ GC_min_sp = GC_high_water;
+ GC_stack_last_cleared = GC_gc_no;
+ GC_words_allocd_at_reset = GC_words_allocd;
+ }
+ /* Adjust GC_high_water */
+ MAKE_COOLER(GC_high_water, WORDS_TO_BYTES(DEGRADE_RATE) + GC_SLOP);
+ if (sp HOTTER_THAN GC_high_water) {
+ GC_high_water = sp;
+ }
+ MAKE_HOTTER(GC_high_water, GC_SLOP);
+ limit = GC_min_sp;
+ MAKE_HOTTER(limit, SLOP);
+ if (sp COOLER_THAN limit) {
+ limit &= ~0xf; /* Make it sufficiently aligned for assembly */
+ /* implementations of GC_clear_stack_inner. */
+ GC_min_sp = sp;
+ return(GC_clear_stack_inner(arg, limit));
+ } else if (WORDS_TO_BYTES(GC_words_allocd - GC_words_allocd_at_reset)
+ > CLEAR_THRESHOLD) {
+ /* Restart clearing process, but limit how much clearing we do. */
+ GC_min_sp = sp;
+ MAKE_HOTTER(GC_min_sp, CLEAR_THRESHOLD/4);
+ if (GC_min_sp HOTTER_THAN GC_high_water) GC_min_sp = GC_high_water;
+ GC_words_allocd_at_reset = GC_words_allocd;
+ }
+ return(arg);
+# endif
+}
+
+
+/* Return a pointer to the base address of p, given a pointer to	*/
+/* an address within an object.  Return 0 otherwise.			*/
+# ifdef __STDC__
+ GC_PTR GC_base(GC_PTR p)
+# else
+ GC_PTR GC_base(p)
+ GC_PTR p;
+# endif
+{
+ register word r;
+ register struct hblk *h;
+ register bottom_index *bi;
+ register hdr *candidate_hdr;
+ register word limit;
+
+ r = (word)p;
+ if (!GC_is_initialized) return 0;
+ h = HBLKPTR(r);
+ GET_BI(r, bi);
+ candidate_hdr = HDR_FROM_BI(bi, r);
+ if (candidate_hdr == 0) return(0);
+ /* If it's a pointer to the middle of a large object, move it */
+ /* to the beginning. */
+ while (IS_FORWARDING_ADDR_OR_NIL(candidate_hdr)) {
+ h = FORWARDED_ADDR(h,candidate_hdr);
+ r = (word)h;
+ candidate_hdr = HDR(h);
+ }
+ if (candidate_hdr -> hb_map == GC_invalid_map) return(0);
+ /* Make sure r points to the beginning of the object */
+ r &= ~(WORDS_TO_BYTES(1) - 1);
+ {
+ register int offset = HBLKDISPL(r);
+ register signed_word sz = candidate_hdr -> hb_sz;
+ register signed_word map_entry;
+
+ map_entry = MAP_ENTRY((candidate_hdr -> hb_map), offset);
+ if (map_entry > CPP_MAX_OFFSET) {
+ map_entry = (signed_word)(BYTES_TO_WORDS(offset)) % sz;
+ }
+ r -= WORDS_TO_BYTES(map_entry);
+ limit = r + WORDS_TO_BYTES(sz);
+ if (limit > (word)(h + 1)
+ && sz <= BYTES_TO_WORDS(HBLKSIZE)) {
+ return(0);
+ }
+ if ((word)p >= limit) return(0);
+ }
+ return((GC_PTR)r);
+}
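For client code, GC_base doubles as a cheap membership test: it returns the object base for any pointer into the GC heap and 0 for anything else. A minimal usage sketch against the public gc.h interface:

    #include <assert.h>
    #include "gc.h"

    int main(void)
    {
        char *obj = GC_MALLOC(100);
        char *interior = obj + 37;

        /* An interior pointer maps back to the object start. */
        assert(GC_base(interior) == (GC_PTR)obj);
        /* A non-heap address (here, a stack slot) yields 0. */
        assert(GC_base((GC_PTR)&obj) == 0);
        return 0;
    }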
+
+
+/* Return the size of an object, given a pointer to its base. */
+/* (For small objects this also happens to work from interior pointers, */
+/* but that shouldn't be relied upon.) */
+# ifdef __STDC__
+ size_t GC_size(GC_PTR p)
+# else
+ size_t GC_size(p)
+ GC_PTR p;
+# endif
+{
+ register int sz;
+ register hdr * hhdr = HDR(p);
+
+ sz = WORDS_TO_BYTES(hhdr -> hb_sz);
+ return(sz);
+}
+
+size_t GC_get_heap_size GC_PROTO(())
+{
+ return ((size_t) GC_heapsize);
+}
+
+size_t GC_get_free_bytes GC_PROTO(())
+{
+ return ((size_t) GC_large_free_bytes);
+}
+
+size_t GC_get_bytes_since_gc GC_PROTO(())
+{
+ return ((size_t) WORDS_TO_BYTES(GC_words_allocd));
+}
+
+size_t GC_get_total_bytes GC_PROTO(())
+{
+ return ((size_t) WORDS_TO_BYTES(GC_words_allocd+GC_words_allocd_before_gc));
+}
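These four accessors make lightweight heap monitoring possible without touching collector internals; a usage sketch:

    #include <stdio.h>
    #include "gc.h"

    /* Print a one-line heap summary. All values are in bytes. */
    static void print_gc_stats(void)
    {
        printf("heap=%lu free=%lu since_gc=%lu total=%lu\n",
               (unsigned long)GC_get_heap_size(),
               (unsigned long)GC_get_free_bytes(),
               (unsigned long)GC_get_bytes_since_gc(),
               (unsigned long)GC_get_total_bytes());
    }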
+
+GC_bool GC_is_initialized = FALSE;
+
+void GC_init()
+{
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+
+#if defined(GC_WIN32_THREADS) && !defined(GC_PTHREADS)
+ if (!GC_is_initialized) {
+ BOOL (WINAPI *pfn) (LPCRITICAL_SECTION, DWORD) = NULL;
+ HMODULE hK32 = GetModuleHandle("kernel32.dll");
+ if (hK32)
+ pfn = (BOOL (WINAPI *) (LPCRITICAL_SECTION, DWORD))
+ GetProcAddress (hK32,
+ "InitializeCriticalSectionAndSpinCount");
+ if (pfn)
+ pfn(&GC_allocate_ml, 4000);
+ else
+ InitializeCriticalSection (&GC_allocate_ml);
+ }
+#endif /* MSWIN32 */
+
+ LOCK();
+ GC_init_inner();
+ UNLOCK();
+ ENABLE_SIGNALS();
+
+# if defined(PARALLEL_MARK) || defined(THREAD_LOCAL_ALLOC)
+ /* Make sure marker threads are started and thread-local */
+ /* allocation is initialized, in case we didn't get */
+ /* called from GC_init_parallel(). */
+ {
+ extern void GC_init_parallel(void);
+ GC_init_parallel();
+ }
+# endif /* PARALLEL_MARK || THREAD_LOCAL_ALLOC */
+
+# if defined(DYNAMIC_LOADING) && defined(DARWIN)
+ {
+ /* This must be called WITHOUT the allocation lock held
+ and before any threads are created */
+ extern void GC_init_dyld();
+ GC_init_dyld();
+ }
+# endif
+}
+
+#if defined(MSWIN32) || defined(MSWINCE)
+ CRITICAL_SECTION GC_write_cs;
+#endif
+
+#ifdef MSWIN32
+ extern void GC_init_win32 GC_PROTO((void));
+#endif
+
+extern void GC_setpagesize();
+
+
+#ifdef MSWIN32
+extern GC_bool GC_no_win32_dlls;
+#else
+# define GC_no_win32_dlls FALSE
+#endif
+
+void GC_exit_check GC_PROTO((void))
+{
+ GC_gcollect();
+}
+
+#ifdef SEARCH_FOR_DATA_START
+ extern void GC_init_linux_data_start GC_PROTO((void));
+#endif
+
+#ifdef UNIX_LIKE
+
+extern void GC_set_and_save_fault_handler GC_PROTO((void (*handler)(int)));
+
+static void looping_handler(sig)
+int sig;
+{
+ GC_err_printf1("Caught signal %d: looping in handler\n", sig);
+ for(;;);
+}
+
+static GC_bool installed_looping_handler = FALSE;
+
+static void maybe_install_looping_handler()
+{
+ /* Install looping handler before the write fault handler, so we */
+ /* handle write faults correctly. */
+ if (!installed_looping_handler && 0 != GETENV("GC_LOOP_ON_ABORT")) {
+ GC_set_and_save_fault_handler(looping_handler);
+ installed_looping_handler = TRUE;
+ }
+}
+
+#else /* !UNIX_LIKE */
+
+# define maybe_install_looping_handler()
+
+#endif
+
+void GC_init_inner()
+{
+# if !defined(THREADS) && defined(GC_ASSERTIONS)
+ word dummy;
+# endif
+ word initial_heap_sz = (word)MINHINCR;
+
+ if (GC_is_initialized) return;
+# ifdef PRINTSTATS
+ GC_print_stats = 1;
+# endif
+# if defined(MSWIN32) || defined(MSWINCE)
+ InitializeCriticalSection(&GC_write_cs);
+# endif
+ if (0 != GETENV("GC_PRINT_STATS")) {
+ GC_print_stats = 1;
+ }
+# ifndef NO_DEBUGGING
+ if (0 != GETENV("GC_DUMP_REGULARLY")) {
+ GC_dump_regularly = 1;
+ }
+# endif
+# ifdef KEEP_BACK_PTRS
+ {
+ char * backtraces_string = GETENV("GC_BACKTRACES");
+ if (0 != backtraces_string) {
+ GC_backtraces = atol(backtraces_string);
+ if (backtraces_string[0] == '\0') GC_backtraces = 1;
+ }
+ }
+# endif
+ if (0 != GETENV("GC_FIND_LEAK")) {
+ GC_find_leak = 1;
+# ifdef __STDC__
+ atexit(GC_exit_check);
+# endif
+ }
+ if (0 != GETENV("GC_ALL_INTERIOR_POINTERS")) {
+ GC_all_interior_pointers = 1;
+ }
+ if (0 != GETENV("GC_DONT_GC")) {
+ GC_dont_gc = 1;
+ }
+ if (0 != GETENV("GC_PRINT_BACK_HEIGHT")) {
+ GC_print_back_height = 1;
+ }
+ if (0 != GETENV("GC_NO_BLACKLIST_WARNING")) {
+ GC_large_alloc_warn_interval = LONG_MAX;
+ }
+ {
+ char * time_limit_string = GETENV("GC_PAUSE_TIME_TARGET");
+ if (0 != time_limit_string) {
+ long time_limit = atol(time_limit_string);
+ if (time_limit < 5) {
+ WARN("GC_PAUSE_TIME_TARGET environment variable value too small "
+ "or bad syntax: Ignoring\n", 0);
+ } else {
+ GC_time_limit = time_limit;
+ }
+ }
+ }
+ {
+ char * interval_string = GETENV("GC_LARGE_ALLOC_WARN_INTERVAL");
+ if (0 != interval_string) {
+ long interval = atol(interval_string);
+ if (interval <= 0) {
+ WARN("GC_LARGE_ALLOC_WARN_INTERVAL environment variable has "
+ "bad value: Ignoring\n", 0);
+ } else {
+ GC_large_alloc_warn_interval = interval;
+ }
+ }
+ }
+ maybe_install_looping_handler();
+ /* Adjust normal object descriptor for extra allocation. */
+ if (ALIGNMENT > GC_DS_TAGS && EXTRA_BYTES != 0) {
+ GC_obj_kinds[NORMAL].ok_descriptor = ((word)(-ALIGNMENT) | GC_DS_LENGTH);
+ }
+ GC_setpagesize();
+ GC_exclude_static_roots(beginGC_arrays, endGC_arrays);
+ GC_exclude_static_roots(beginGC_obj_kinds, endGC_obj_kinds);
+# ifdef SEPARATE_GLOBALS
+ GC_exclude_static_roots(beginGC_objfreelist, endGC_objfreelist);
+ GC_exclude_static_roots(beginGC_aobjfreelist, endGC_aobjfreelist);
+# endif
+# ifdef MSWIN32
+ GC_init_win32();
+# endif
+# if defined(SEARCH_FOR_DATA_START)
+ GC_init_linux_data_start();
+# endif
+# if (defined(NETBSD) || defined(OPENBSD)) && defined(__ELF__)
+ GC_init_netbsd_elf();
+# endif
+# if defined(GC_PTHREADS) || defined(GC_SOLARIS_THREADS) \
+ || defined(GC_WIN32_THREADS)
+ GC_thr_init();
+# endif
+# ifdef GC_SOLARIS_THREADS
+ /* We need dirty bits in order to find live stack sections. */
+ GC_dirty_init();
+# endif
+# if !defined(THREADS) || defined(GC_PTHREADS) || defined(GC_WIN32_THREADS) \
+ || defined(GC_SOLARIS_THREADS)
+ if (GC_stackbottom == 0) {
+ # if defined(GC_PTHREADS) && ! defined(GC_SOLARIS_THREADS)
+ /* Use thread_stack_base if available, as GC could be initialized from
+ a thread that is not the "main" thread. */
+ GC_stackbottom = GC_get_thread_stack_base();
+ # endif
+ if (GC_stackbottom == 0)
+ GC_stackbottom = GC_get_stack_base();
+# if (defined(LINUX) || defined(HPUX)) && defined(IA64)
+ GC_register_stackbottom = GC_get_register_stack_base();
+# endif
+ } else {
+# if (defined(LINUX) || defined(HPUX)) && defined(IA64)
+ if (GC_register_stackbottom == 0) {
+ WARN("GC_register_stackbottom should be set with GC_stackbottom", 0);
+ /* The following may fail, since we may rely on */
+ /* alignment properties that may not hold with a user set */
+ /* GC_stackbottom. */
+ GC_register_stackbottom = GC_get_register_stack_base();
+ }
+# endif
+ }
+# endif
+ GC_STATIC_ASSERT(sizeof (ptr_t) == sizeof(word));
+ GC_STATIC_ASSERT(sizeof (signed_word) == sizeof(word));
+ GC_STATIC_ASSERT(sizeof (struct hblk) == HBLKSIZE);
+# ifndef THREADS
+# if defined(STACK_GROWS_UP) && defined(STACK_GROWS_DOWN)
+ ABORT(
+ "Only one of STACK_GROWS_UP and STACK_GROWS_DOWN should be defd\n");
+# endif
+# if !defined(STACK_GROWS_UP) && !defined(STACK_GROWS_DOWN)
+ ABORT(
+ "One of STACK_GROWS_UP and STACK_GROWS_DOWN should be defd\n");
+# endif
+# ifdef STACK_GROWS_DOWN
+ GC_ASSERT((word)(&dummy) <= (word)GC_stackbottom);
+# else
+ GC_ASSERT((word)(&dummy) >= (word)GC_stackbottom);
+# endif
+# endif
+# if !defined(_AUX_SOURCE) || defined(__GNUC__)
+ GC_ASSERT((word)(-1) > (word)0);
+ /* word should be unsigned */
+# endif
+ GC_ASSERT((signed_word)(-1) < (signed_word)0);
+
+ /* Add initial guess of root sets. Do this first, since sbrk(0) */
+ /* might be used. */
+ if (GC_REGISTER_MAIN_STATIC_DATA()) GC_register_data_segments();
+ GC_init_headers();
+ GC_bl_init();
+ GC_mark_init();
+ {
+ char * sz_str = GETENV("GC_INITIAL_HEAP_SIZE");
+ if (sz_str != NULL) {
+ initial_heap_sz = atoi(sz_str);
+ if (initial_heap_sz <= MINHINCR * HBLKSIZE) {
+ WARN("Bad initial heap size %s - ignoring it.\n",
+ sz_str);
+ }
+ initial_heap_sz = divHBLKSZ(initial_heap_sz);
+ }
+ }
+ {
+ char * sz_str = GETENV("GC_MAXIMUM_HEAP_SIZE");
+ if (sz_str != NULL) {
+ word max_heap_sz = (word)atol(sz_str);
+ if (max_heap_sz < initial_heap_sz * HBLKSIZE) {
+ WARN("Bad maximum heap size %s - ignoring it.\n",
+ sz_str);
+ }
+ if (0 == GC_max_retries) GC_max_retries = 2;
+ GC_set_max_heap_size(max_heap_sz);
+ }
+ }
+ if (!GC_expand_hp_inner(initial_heap_sz)) {
+ GC_err_printf0("Can't start up: not enough memory\n");
+ EXIT();
+ }
+ /* Preallocate large object map. It's otherwise inconvenient to */
+ /* deal with failure. */
+ if (!GC_add_map_entry((word)0)) {
+ GC_err_printf0("Can't start up: not enough memory\n");
+ EXIT();
+ }
+ GC_register_displacement_inner(0L);
+# ifdef MERGE_SIZES
+ GC_init_size_map();
+# endif
+# ifdef PCR
+ if (PCR_IL_Lock(PCR_Bool_false, PCR_allSigsBlocked, PCR_waitForever)
+ != PCR_ERes_okay) {
+ ABORT("Can't lock load state\n");
+ } else if (PCR_IL_Unlock() != PCR_ERes_okay) {
+ ABORT("Can't unlock load state\n");
+ }
+ PCR_IL_Unlock();
+ GC_pcr_install();
+# endif
+# if !defined(SMALL_CONFIG)
+ if (!GC_no_win32_dlls && 0 != GETENV("GC_ENABLE_INCREMENTAL")) {
+ GC_ASSERT(!GC_incremental);
+ GC_setpagesize();
+# ifndef GC_SOLARIS_THREADS
+ GC_dirty_init();
+# endif
+ GC_ASSERT(GC_words_allocd == 0);
+ GC_incremental = TRUE;
+ }
+# endif /* !SMALL_CONFIG */
+ COND_DUMP;
+ /* Get black list set up and/or incremental GC started */
+ if (!GC_dont_precollect || GC_incremental) GC_gcollect_inner();
+ GC_is_initialized = TRUE;
+# ifdef STUBBORN_ALLOC
+ GC_stubborn_init();
+# endif
+ /* Convince lint that some things are used */
+# ifdef LINT
+ {
+ extern char * GC_copyright[];
+ extern int GC_read();
+ extern void GC_register_finalizer_no_order();
+
+ GC_noop(GC_copyright, GC_find_header,
+ GC_push_one, GC_call_with_alloc_lock, GC_read,
+ GC_dont_expand,
+# ifndef NO_DEBUGGING
+ GC_dump,
+# endif
+ GC_register_finalizer_no_order);
+ }
+# endif
+}
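Because GC_init_inner reads every tuning knob from the environment, a client can size the heap without recompiling the collector. A sketch, assuming POSIX setenv; the variables must be in place before the first allocation triggers initialization:

    #include <stdlib.h>
    #include "gc.h"

    int main(void)
    {
        /* Must happen before GC_init_inner runs; it reads these once. */
        setenv("GC_INITIAL_HEAP_SIZE", "16777216", 1);   /* bytes */
        setenv("GC_MAXIMUM_HEAP_SIZE", "268435456", 1);  /* bytes */
        setenv("GC_PRINT_STATS", "1", 1);

        GC_init();            /* runs GC_init_inner under the lock */
        (void)GC_MALLOC(128);
        return 0;
    }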
+
+void GC_enable_incremental GC_PROTO(())
+{
+# if !defined(SMALL_CONFIG) && !defined(KEEP_BACK_PTRS)
+ /* If we are keeping back pointers, the GC itself dirties all */
+ /* pages on which objects have been marked, making */
+ /* incremental GC pointless. */
+ if (!GC_find_leak) {
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ if (GC_incremental) goto out;
+ GC_setpagesize();
+ if (GC_no_win32_dlls) goto out;
+# ifndef GC_SOLARIS_THREADS
+ maybe_install_looping_handler(); /* Before write fault handler! */
+ GC_dirty_init();
+# endif
+ if (!GC_is_initialized) {
+ GC_init_inner();
+ }
+ if (GC_incremental) goto out;
+ if (GC_dont_gc) {
+ /* Can't easily do it. */
+ UNLOCK();
+ ENABLE_SIGNALS();
+ return;
+ }
+ if (GC_words_allocd > 0) {
+ /* There may be unmarked reachable objects */
+ GC_gcollect_inner();
+ } /* else we're OK in assuming everything's */
+ /* clean since nothing can point to an */
+ /* unmarked object. */
+ GC_read_dirty();
+ GC_incremental = TRUE;
+out:
+ UNLOCK();
+ ENABLE_SIGNALS();
+ }
+# endif
+}
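A client typically requests incremental mode immediately after initialization; a minimal sketch:

    #include "gc.h"

    int main(void)
    {
        GC_init();
        /* A no-op if the platform lacks dirty-bit support, if */
        /* GC_find_leak is set, or if the collector was built  */
        /* with KEEP_BACK_PTRS (see the #if guard above).      */
        GC_enable_incremental();
        (void)GC_MALLOC(4096);
        return 0;
    }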
+
+
+#if defined(MSWIN32) || defined(MSWINCE)
+# define LOG_FILE _T("gc.log")
+
+ HANDLE GC_stdout = 0;
+
+ void GC_deinit()
+ {
+ if (GC_is_initialized) {
+ DeleteCriticalSection(&GC_write_cs);
+ }
+ }
+
+ int GC_write(buf, len)
+ GC_CONST char * buf;
+ size_t len;
+ {
+ BOOL tmp;
+ DWORD written;
+ if (len == 0)
+ return 0;
+ EnterCriticalSection(&GC_write_cs);
+ if (GC_stdout == INVALID_HANDLE_VALUE) {
+     /* Release GC_write_cs, acquired above, before bailing out. */
+     LeaveCriticalSection(&GC_write_cs);
+     return -1;
+ } else if (GC_stdout == 0) {
+ GC_stdout = CreateFile(LOG_FILE, GENERIC_WRITE,
+ FILE_SHARE_READ | FILE_SHARE_WRITE,
+ NULL, CREATE_ALWAYS, FILE_FLAG_WRITE_THROUGH,
+ NULL);
+ if (GC_stdout == INVALID_HANDLE_VALUE) ABORT("Open of log file failed");
+ }
+ tmp = WriteFile(GC_stdout, buf, len, &written, NULL);
+ if (!tmp)
+ DebugBreak();
+ LeaveCriticalSection(&GC_write_cs);
+ return tmp ? (int)written : -1;
+ }
+
+#endif
+
+#if defined(OS2) || defined(MACOS)
+FILE * GC_stdout = NULL;
+FILE * GC_stderr = NULL;
+int GC_tmp; /* Should really be local ... */
+
+ void GC_set_files()
+ {
+ if (GC_stdout == NULL) {
+ GC_stdout = stdout;
+ }
+ if (GC_stderr == NULL) {
+ GC_stderr = stderr;
+ }
+ }
+#endif
+
+#if !defined(OS2) && !defined(MACOS) && !defined(MSWIN32) && !defined(MSWINCE)
+ int GC_stdout = 1;
+ int GC_stderr = 2;
+# if !defined(AMIGA)
+# include <unistd.h>
+# endif
+#endif
+
+#if !defined(MSWIN32) && !defined(MSWINCE) && !defined(OS2) \
+ && !defined(MACOS) && !defined(ECOS) && !defined(NOSYS)
+int GC_write(fd, buf, len)
+int fd;
+GC_CONST char *buf;
+size_t len;
+{
+ register int bytes_written = 0;
+ register int result;
+
+ while (bytes_written < len) {
+# ifdef GC_SOLARIS_THREADS
+ result = syscall(SYS_write, fd, buf + bytes_written,
+ len - bytes_written);
+# else
+ result = write(fd, buf + bytes_written, len - bytes_written);
+# endif
+ if (-1 == result) return(result);
+ bytes_written += result;
+ }
+ return(bytes_written);
+}
+#endif /* UN*X */
+
+#ifdef ECOS
+int GC_write(fd, buf, len)
+int fd;
+GC_CONST char *buf;
+size_t len;
+{
+ _Jv_diag_write (buf, len);
+ return len;
+}
+#endif
+
+#ifdef NOSYS
+int GC_write(fd, buf, len)
+int fd;
+GC_CONST char *buf;
+size_t len;
+{
+ /* No writing. */
+ return len;
+}
+#endif
+
+
+#if defined(MSWIN32) || defined(MSWINCE)
+# define WRITE(f, buf, len) GC_write(buf, len)
+#else
+# if defined(OS2) || defined(MACOS)
+# define WRITE(f, buf, len) (GC_set_files(), \
+ GC_tmp = fwrite((buf), 1, (len), (f)), \
+ fflush(f), GC_tmp)
+# else
+# define WRITE(f, buf, len) GC_write((f), (buf), (len))
+# endif
+#endif
+
+/* A version of printf that is unlikely to call malloc, and is thus safer */
+/* to call from the collector in case malloc has been bound to GC_malloc. */
+/* Assumes that no more than 1023 characters are written at once. */
+/* Assumes that all arguments have been converted to something of the */
+/* same size as long, and that the format conversions expect something */
+/* of that size. */
+void GC_printf(format, a, b, c, d, e, f)
+GC_CONST char * format;
+long a, b, c, d, e, f;
+{
+ char buf[1025];
+
+ if (GC_quiet) return;
+ buf[1024] = 0x15;
+ (void) sprintf(buf, format, a, b, c, d, e, f);
+ if (buf[1024] != 0x15) ABORT("GC_printf clobbered stack");
+ if (WRITE(GC_stdout, buf, strlen(buf)) < 0) ABORT("write to stdout failed");
+}
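The 0x15 byte is a canary: sprintf enforces no length limit, so the code plants a known value just past the largest legal output and checks it afterwards. The same pattern in isolation, as a standalone sketch rather than the collector's own code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Format up to 1023 characters into out, aborting on overrun. */
    static void checked_format(char *out, const char *fmt, long a)
    {
        char buf[1025];

        buf[1024] = 0x15;         /* canary one byte past legal output */
        (void) sprintf(buf, fmt, a);
        if (buf[1024] != 0x15) {  /* output ran past 1023 characters */
            fputs("buffer clobbered\n", stderr);
            abort();
        }
        strcpy(out, buf);
    }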
+
+void GC_err_printf(format, a, b, c, d, e, f)
+GC_CONST char * format;
+long a, b, c, d, e, f;
+{
+ char buf[1025];
+
+ buf[1024] = 0x15;
+ (void) sprintf(buf, format, a, b, c, d, e, f);
+ if (buf[1024] != 0x15) ABORT("GC_err_printf clobbered stack");
+ if (WRITE(GC_stderr, buf, strlen(buf)) < 0) ABORT("write to stderr failed");
+}
+
+void GC_err_puts(s)
+GC_CONST char *s;
+{
+ if (WRITE(GC_stderr, s, strlen(s)) < 0) ABORT("write to stderr failed");
+}
+
+#if defined(LINUX) && !defined(SMALL_CONFIG)
+void GC_err_write(buf, len)
+GC_CONST char *buf;
+size_t len;
+{
+ if (WRITE(GC_stderr, buf, len) < 0) ABORT("write to stderr failed");
+}
+#endif
+
+# if defined(__STDC__) || defined(__cplusplus)
+ void GC_default_warn_proc(char *msg, GC_word arg)
+# else
+ void GC_default_warn_proc(msg, arg)
+ char *msg;
+ GC_word arg;
+# endif
+{
+ GC_err_printf1(msg, (unsigned long)arg);
+}
+
+GC_warn_proc GC_current_warn_proc = GC_default_warn_proc;
+
+# if defined(__STDC__) || defined(__cplusplus)
+ GC_warn_proc GC_set_warn_proc(GC_warn_proc p)
+# else
+ GC_warn_proc GC_set_warn_proc(p)
+ GC_warn_proc p;
+# endif
+{
+ GC_warn_proc result;
+
+# ifdef GC_WIN32_THREADS
+ GC_ASSERT(GC_is_initialized);
+# endif
+ LOCK();
+ result = GC_current_warn_proc;
+ GC_current_warn_proc = p;
+ UNLOCK();
+ return(result);
+}
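Applications can reroute warnings into their own logging via this hook; a usage sketch (my_warn_proc is a hypothetical name):

    #include <stdio.h>
    #include "gc.h"

    /* Hypothetical handler: tag GC warnings on stderr. The msg */
    /* argument is a printf format taking one numeric argument. */
    static void my_warn_proc(char *msg, GC_word arg)
    {
        fprintf(stderr, "[gc] ");
        fprintf(stderr, msg, (unsigned long)arg);
    }

    static void install_warn_proc(void)
    {
        GC_warn_proc old = GC_set_warn_proc(my_warn_proc);
        (void)old;    /* previous proc, available for chaining */
    }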
+
+# if defined(__STDC__) || defined(__cplusplus)
+ GC_word GC_set_free_space_divisor (GC_word value)
+# else
+ GC_word GC_set_free_space_divisor (value)
+ GC_word value;
+# endif
+{
+ GC_word old = GC_free_space_divisor;
+ GC_free_space_divisor = value;
+ return old;
+}
+
+#ifndef PCR
+void GC_abort(msg)
+GC_CONST char * msg;
+{
+# if defined(MSWIN32)
+ (void) MessageBoxA(NULL, msg, "Fatal error in gc", MB_ICONERROR|MB_OK);
+# else
+ GC_err_printf1("%s\n", msg);
+# endif
+ if (GETENV("GC_LOOP_ON_ABORT") != NULL) {
+ /* In many cases it's easier to debug a running process. */
+ /* It's arguably nicer to sleep, but that makes it harder */
+ /* to look at the thread if the debugger doesn't know much */
+ /* about threads. */
+ for(;;) {}
+ }
+# if defined(MSWIN32) || defined(MSWINCE)
+ DebugBreak();
+# else
+ (void) abort();
+# endif
+}
+#endif
+
+void GC_enable()
+{
+ LOCK();
+ GC_dont_gc--;
+ UNLOCK();
+}
+
+void GC_disable()
+{
+ LOCK();
+ GC_dont_gc++;
+ UNLOCK();
+}
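Since GC_dont_gc is a counter rather than a flag, disable/enable pairs nest; a sketch:

    #include "gc.h"

    static void latency_sensitive_region(void)
    {
        GC_disable();        /* counter 0 -> 1: collections held off */
        GC_disable();        /* nested: 1 -> 2, still held off */
        /* ... work that must not be interrupted by a collection ... */
        GC_enable();         /* 2 -> 1 */
        GC_enable();         /* 1 -> 0: collections permitted again */
    }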
+
+/* Helper procedures for new kind creation. */
+void ** GC_new_free_list_inner()
+{
+ void *result = GC_INTERNAL_MALLOC((MAXOBJSZ+1)*sizeof(ptr_t), PTRFREE);
+ if (result == 0) ABORT("Failed to allocate freelist for new kind");
+ BZERO(result, (MAXOBJSZ+1)*sizeof(ptr_t));
+ return result;
+}
+
+void ** GC_new_free_list()
+{
+ void *result;
+ LOCK(); DISABLE_SIGNALS();
+ result = GC_new_free_list_inner();
+ UNLOCK(); ENABLE_SIGNALS();
+ return result;
+}
+
+int GC_new_kind_inner(fl, descr, adjust, clear)
+void **fl;
+GC_word descr;
+int adjust;
+int clear;
+{
+ int result = GC_n_kinds++;
+
+ if (GC_n_kinds > MAXOBJKINDS) ABORT("Too many kinds");
+ GC_obj_kinds[result].ok_freelist = (ptr_t *)fl;
+ GC_obj_kinds[result].ok_reclaim_list = 0;
+ GC_obj_kinds[result].ok_descriptor = descr;
+ GC_obj_kinds[result].ok_relocate_descr = adjust;
+ GC_obj_kinds[result].ok_init = clear;
+ return result;
+}
+
+int GC_new_kind(fl, descr, adjust, clear)
+void **fl;
+GC_word descr;
+int adjust;
+int clear;
+{
+ int result;
+ LOCK(); DISABLE_SIGNALS();
+ result = GC_new_kind_inner(fl, descr, adjust, clear);
+ UNLOCK(); ENABLE_SIGNALS();
+ return result;
+}
+
+int GC_new_proc_inner(proc)
+GC_mark_proc proc;
+{
+ int result = GC_n_mark_procs++;
+
+ if (GC_n_mark_procs > MAX_MARK_PROCS) ABORT("Too many mark procedures");
+ GC_mark_procs[result] = proc;
+ return result;
+}
+
+int GC_new_proc(proc)
+GC_mark_proc proc;
+{
+ int result;
+ LOCK(); DISABLE_SIGNALS();
+ result = GC_new_proc_inner(proc);
+ UNLOCK(); ENABLE_SIGNALS();
+ return result;
+}
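Taken together, these helpers implement the usual recipe for a custom object kind: build a free list, then register the kind with a mark descriptor. A hedged sketch; GC_new_free_list, GC_new_kind, GC_DS_LENGTH and GC_generic_malloc are declared in gc_mark.h, and the exact arguments below are an assumption modeled on the NORMAL kind:

    #include "gc.h"
    #include "gc_mark.h"

    /* Register a kind resembling NORMAL: a length-based descriptor */
    /* (GC_DS_LENGTH), adjusted per object, with objects cleared.   */
    static int my_kind;

    static void init_my_kind(void)
    {
        my_kind = GC_new_kind(GC_new_free_list(), GC_DS_LENGTH, 1, 1);
    }

    static void * alloc_my_kind(size_t lb)
    {
        return GC_generic_malloc(lb, my_kind);  /* allocate from it */
    }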
+
+
+#if !defined(NO_DEBUGGING)
+
+void GC_dump()
+{
+ GC_printf0("***Static roots:\n");
+ GC_print_static_roots();
+ GC_printf0("\n***Heap sections:\n");
+ GC_print_heap_sects();
+ GC_printf0("\n***Free blocks:\n");
+ GC_print_hblkfreelist();
+ GC_printf0("\n***Blocks in use:\n");
+ GC_print_block_list();
+ GC_printf0("\n***Finalization statistics:\n");
+ GC_print_finalization_stats();
+}
+
+#endif /* NO_DEBUGGING */
Added: llvm-gcc-4.2/trunk/boehm-gc/new_hblk.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/new_hblk.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/new_hblk.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/new_hblk.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,263 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 2000 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ *
+ * This file contains the functions:
+ * ptr_t GC_build_flXXX(h, old_fl)
+ * void GC_new_hblk(n)
+ */
+/* Boehm, May 19, 1994 2:09 pm PDT */
+
+
+# include <stdio.h>
+# include "private/gc_priv.h"
+
+#ifndef SMALL_CONFIG
+/*
+ * Build a free list for size 1 objects inside hblk h. Set the last link to
+ * be ofl. Return a pointer to the first free list entry.
+ */
+ptr_t GC_build_fl1(h, ofl)
+struct hblk *h;
+ptr_t ofl;
+{
+ register word * p = h -> hb_body;
+ register word * lim = (word *)(h + 1);
+
+ p[0] = (word)ofl;
+ p[1] = (word)(p);
+ p[2] = (word)(p+1);
+ p[3] = (word)(p+2);
+ p += 4;
+ for (; p < lim; p += 4) {
+ p[0] = (word)(p-1);
+ p[1] = (word)(p);
+ p[2] = (word)(p+1);
+ p[3] = (word)(p+2);
+ };
+ return((ptr_t)(p-1));
+}
+
+/* The same for size 2 cleared objects */
+ptr_t GC_build_fl_clear2(h, ofl)
+struct hblk *h;
+ptr_t ofl;
+{
+ register word * p = h -> hb_body;
+ register word * lim = (word *)(h + 1);
+
+ p[0] = (word)ofl;
+ p[1] = 0;
+ p[2] = (word)p;
+ p[3] = 0;
+ p += 4;
+ for (; p < lim; p += 4) {
+ p[0] = (word)(p-2);
+ p[1] = 0;
+ p[2] = (word)p;
+ p[3] = 0;
+ };
+ return((ptr_t)(p-2));
+}
+
+/* The same for size 3 cleared objects */
+ptr_t GC_build_fl_clear3(h, ofl)
+struct hblk *h;
+ptr_t ofl;
+{
+ register word * p = h -> hb_body;
+ register word * lim = (word *)(h + 1) - 2;
+
+ p[0] = (word)ofl;
+ p[1] = 0;
+ p[2] = 0;
+ p += 3;
+ for (; p < lim; p += 3) {
+ p[0] = (word)(p-3);
+ p[1] = 0;
+ p[2] = 0;
+ };
+ return((ptr_t)(p-3));
+}
+
+/* The same for size 4 cleared objects */
+ptr_t GC_build_fl_clear4(h, ofl)
+struct hblk *h;
+ptr_t ofl;
+{
+ register word * p = h -> hb_body;
+ register word * lim = (word *)(h + 1);
+
+ p[0] = (word)ofl;
+ p[1] = 0;
+ p[2] = 0;
+ p[3] = 0;
+ p += 4;
+ for (; p < lim; p += 4) {
+ PREFETCH_FOR_WRITE((ptr_t)(p+64));
+ p[0] = (word)(p-4);
+ p[1] = 0;
+ CLEAR_DOUBLE(p+2);
+ };
+ return((ptr_t)(p-4));
+}
+
+/* The same for size 2 uncleared objects */
+ptr_t GC_build_fl2(h, ofl)
+struct hblk *h;
+ptr_t ofl;
+{
+ register word * p = h -> hb_body;
+ register word * lim = (word *)(h + 1);
+
+ p[0] = (word)ofl;
+ p[2] = (word)p;
+ p += 4;
+ for (; p < lim; p += 4) {
+ p[0] = (word)(p-2);
+ p[2] = (word)p;
+ };
+ return((ptr_t)(p-2));
+}
+
+/* The same for size 4 uncleared objects */
+ptr_t GC_build_fl4(h, ofl)
+struct hblk *h;
+ptr_t ofl;
+{
+ register word * p = h -> hb_body;
+ register word * lim = (word *)(h + 1);
+
+ p[0] = (word)ofl;
+ p[4] = (word)p;
+ p += 8;
+ for (; p < lim; p += 8) {
+ PREFETCH_FOR_WRITE((ptr_t)(p+64));
+ p[0] = (word)(p-4);
+ p[4] = (word)p;
+ };
+ return((ptr_t)(p-4));
+}
+
+#endif /* !SMALL_CONFIG */
+
+
+/* Build a free list for objects of size sz inside heap block h. */
+/* Clear objects inside h if clear is set. Add list to the end of */
+/* the free list we build. Return the new free list. */
+/* This could be called without the main GC lock, if we ensure that */
+/* there is no concurrent collection which might reclaim objects that */
+/* we have not yet allocated. */
+ptr_t GC_build_fl(h, sz, clear, list)
+struct hblk *h;
+word sz;
+GC_bool clear;
+ptr_t list;
+{
+ word *p, *prev;
+ word *last_object; /* points to last object in new hblk */
+
+ /* Do a few prefetches here, just because it's cheap. */
+ /* If we were more serious about it, these should go inside */
+ /* the loops. But write prefetches usually don't seem to */
+ /* matter much. */
+ PREFETCH_FOR_WRITE((ptr_t)h);
+ PREFETCH_FOR_WRITE((ptr_t)h + 128);
+ PREFETCH_FOR_WRITE((ptr_t)h + 256);
+ PREFETCH_FOR_WRITE((ptr_t)h + 378);
+ /* Handle small objects sizes more efficiently. For larger objects */
+ /* the difference is less significant. */
+# ifndef SMALL_CONFIG
+ switch (sz) {
+ case 1: return GC_build_fl1(h, list);
+ case 2: if (clear) {
+ return GC_build_fl_clear2(h, list);
+ } else {
+ return GC_build_fl2(h, list);
+ }
+ case 3: if (clear) {
+ return GC_build_fl_clear3(h, list);
+ } else {
+ /* It's messy to do better than the default here. */
+ break;
+ }
+ case 4: if (clear) {
+ return GC_build_fl_clear4(h, list);
+ } else {
+ return GC_build_fl4(h, list);
+ }
+ default:
+ break;
+ }
+# endif /* !SMALL_CONFIG */
+
+ /* Clear the page if necessary. */
+ if (clear) BZERO(h, HBLKSIZE);
+
+ /* Add objects to free list */
+ p = &(h -> hb_body[sz]); /* second object in *h */
+ prev = &(h -> hb_body[0]); /* One object behind p */
+ last_object = (word *)((char *)h + HBLKSIZE);
+ last_object -= sz;
+ /* Last place for last object to start */
+
+ /* make a list of all objects in *h with head as last object */
+ while (p <= last_object) {
+ /* current object's link points to last object */
+ obj_link(p) = (ptr_t)prev;
+ prev = p;
+ p += sz;
+ }
+ p -= sz; /* p now points to last object */
+
+ /*
+ * put p (which is now head of list of objects in *h) as first
+ * pointer in the appropriate free list for this size.
+ */
+ obj_link(h -> hb_body) = list;
+ return ((ptr_t)p);
+}
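Every builder above yields the same structure: an intrusive singly-linked list threaded through the first word of each free object, so no separate list nodes are needed. A standalone sketch of the layout, independent of the GC's internal types:

    #include <stddef.h>
    #include <stdio.h>

    #define NOBJ 8
    #define OBJWORDS 4

    /* Link each object through its own first word, newest first, */
    /* the way GC_build_fl threads objects inside a heap block.   */
    static void *build_free_list(void *block[NOBJ][OBJWORDS])
    {
        void *head = NULL;
        size_t i;

        for (i = 0; i < NOBJ; i++) {
            block[i][0] = head;          /* link = previous head */
            head = &block[i][0];
        }
        return head;
    }

    int main(void)
    {
        static void *block[NOBJ][OBJWORDS];
        void *p = build_free_list(block);
        size_t n = 0;

        for (; p != NULL; p = *(void **)p) n++;  /* walk the links */
        printf("%lu free objects\n", (unsigned long)n);  /* prints 8 */
        return 0;
    }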
+
+/*
+ * Allocate a new heapblock for small objects of size n.
+ * Add all of the heapblock's objects to the free list for objects
+ * of that size.
+ * Set all mark bits if objects are uncollectable.
+ * Will fail to do anything if we are out of memory.
+ */
+void GC_new_hblk(sz, kind)
+register word sz;
+int kind;
+{
+ register struct hblk *h; /* the new heap block */
+ register GC_bool clear = GC_obj_kinds[kind].ok_init;
+
+# ifdef PRINTSTATS
+ if ((sizeof (struct hblk)) > HBLKSIZE) {
+ ABORT("HBLK SZ inconsistency");
+ }
+# endif
+ if (GC_debugging_started) clear = TRUE;
+
+ /* Allocate a new heap block */
+ h = GC_allochblk(sz, kind, 0);
+ if (h == 0) return;
+
+ /* Mark all objects if appropriate. */
+ if (IS_UNCOLLECTABLE(kind)) GC_set_hdr_marks(HDR(h));
+
+ /* Build the free list */
+ GC_obj_kinds[kind].ok_freelist[sz] =
+ GC_build_fl(h, sz, clear, GC_obj_kinds[kind].ok_freelist[sz]);
+}
+
Added: llvm-gcc-4.2/trunk/boehm-gc/obj_map.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/obj_map.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/obj_map.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/obj_map.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,147 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991, 1992 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1999-2001 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+/* Routines for maintaining maps describing heap block
+ * layouts for various object sizes. Allows fast pointer validity checks
+ * and fast location of object start locations on machines (such as SPARC)
+ * with slow division.
+ */
+
+# include "private/gc_priv.h"
+
+map_entry_type * GC_invalid_map = 0;
+
+/* Invalidate the object map associated with a block. Free blocks */
+/* are identified by invalid maps. */
+void GC_invalidate_map(hhdr)
+hdr *hhdr;
+{
+ register int displ;
+
+ if (GC_invalid_map == 0) {
+ GC_invalid_map = (map_entry_type *)GC_scratch_alloc(MAP_SIZE);
+ if (GC_invalid_map == 0) {
+ GC_err_printf0(
+ "Cant initialize GC_invalid_map: insufficient memory\n");
+ EXIT();
+ }
+ for (displ = 0; displ < HBLKSIZE; displ++) {
+ MAP_ENTRY(GC_invalid_map, displ) = OBJ_INVALID;
+ }
+ }
+ hhdr -> hb_map = GC_invalid_map;
+}
+
+/* Consider pointers that are offset bytes displaced from the beginning */
+/* of an object to be valid. */
+
+# if defined(__STDC__) || defined(__cplusplus)
+ void GC_register_displacement(GC_word offset)
+# else
+ void GC_register_displacement(offset)
+ GC_word offset;
+# endif
+{
+ DCL_LOCK_STATE;
+
+ DISABLE_SIGNALS();
+ LOCK();
+ GC_register_displacement_inner(offset);
+ UNLOCK();
+ ENABLE_SIGNALS();
+}
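This entry point matters to clients that keep tagged or offset pointers: every distinct displacement that may be the only reference to an object must be registered, unless GC_all_interior_pointers is enabled. A usage sketch for a hypothetical low-bit tagging scheme:

    #include "gc.h"

    /* Hypothetical scheme: the low pointer bits carry a type tag, */
    /* so stored references point 1 or 2 bytes past object starts. */
    static void init_tagged_pointers(void)
    {
        GC_REGISTER_DISPLACEMENT(1);
        GC_REGISTER_DISPLACEMENT(2);
    }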
+
+void GC_register_displacement_inner(offset)
+word offset;
+{
+ register unsigned i;
+ word map_entry = BYTES_TO_WORDS(offset);
+
+ if (offset >= VALID_OFFSET_SZ) {
+ ABORT("Bad argument to GC_register_displacement");
+ }
+ if (map_entry > MAX_OFFSET) map_entry = OFFSET_TOO_BIG;
+ if (!GC_valid_offsets[offset]) {
+ GC_valid_offsets[offset] = TRUE;
+ GC_modws_valid_offsets[offset % sizeof(word)] = TRUE;
+ if (!GC_all_interior_pointers) {
+ for (i = 0; i <= MAXOBJSZ; i++) {
+ if (GC_obj_map[i] != 0) {
+ if (i == 0) {
+ GC_obj_map[i][offset] = (map_entry_type)map_entry;
+ } else {
+ register unsigned j;
+ register unsigned lb = WORDS_TO_BYTES(i);
+
+ if (offset < lb) {
+ for (j = offset; j < HBLKSIZE; j += lb) {
+ GC_obj_map[i][j] = (map_entry_type)map_entry;
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+}
+
+
+/* Add a heap block map for objects of size sz to obj_map. */
+/* Return FALSE on failure. */
+GC_bool GC_add_map_entry(sz)
+word sz;
+{
+ register unsigned obj_start;
+ register unsigned displ;
+ register map_entry_type * new_map;
+ word map_entry;
+
+ if (sz > MAXOBJSZ) sz = 0;
+ if (GC_obj_map[sz] != 0) {
+ return(TRUE);
+ }
+ new_map = (map_entry_type *)GC_scratch_alloc(MAP_SIZE);
+ if (new_map == 0) return(FALSE);
+# ifdef PRINTSTATS
+ GC_printf1("Adding block map for size %lu\n", (unsigned long)sz);
+# endif
+ for (displ = 0; displ < HBLKSIZE; displ++) {
+ MAP_ENTRY(new_map,displ) = OBJ_INVALID;
+ }
+ if (sz == 0) {
+ for(displ = 0; displ <= HBLKSIZE; displ++) {
+ if (OFFSET_VALID(displ)) {
+ map_entry = BYTES_TO_WORDS(displ);
+ if (map_entry > MAX_OFFSET) map_entry = OFFSET_TOO_BIG;
+ MAP_ENTRY(new_map,displ) = (map_entry_type)map_entry;
+ }
+ }
+ } else {
+ for (obj_start = 0;
+ obj_start + WORDS_TO_BYTES(sz) <= HBLKSIZE;
+ obj_start += WORDS_TO_BYTES(sz)) {
+ for (displ = 0; displ < WORDS_TO_BYTES(sz); displ++) {
+ if (OFFSET_VALID(displ)) {
+ map_entry = BYTES_TO_WORDS(displ);
+ if (map_entry > MAX_OFFSET) map_entry = OFFSET_TOO_BIG;
+ MAP_ENTRY(new_map, obj_start + displ) =
+ (map_entry_type)map_entry;
+ }
+ }
+ }
+ }
+ GC_obj_map[sz] = new_map;
+ return(TRUE);
+}
Added: llvm-gcc-4.2/trunk/boehm-gc/os_dep.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/os_dep.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/os_dep.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/os_dep.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,4280 @@
+/*
+ * Copyright 1988, 1989 Hans-J. Boehm, Alan J. Demers
+ * Copyright (c) 1991-1995 by Xerox Corporation. All rights reserved.
+ * Copyright (c) 1996-1999 by Silicon Graphics. All rights reserved.
+ * Copyright (c) 1999 by Hewlett-Packard Company. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+
+# include "private/gc_priv.h"
+
+# if defined(LINUX) && !defined(POWERPC)
+# include <linux/version.h>
+# if (LINUX_VERSION_CODE <= 0x10400)
+ /* Ugly hack to get struct sigcontext_struct definition. Required */
+ /* for some early 1.3.X releases. Will hopefully go away soon. */
+ /* In some later Linux releases, asm/sigcontext.h may have to */
+ /* be included instead. */
+# define __KERNEL__
+# include <asm/signal.h>
+# undef __KERNEL__
+# else
+ /* Kernels prior to 2.1.1 defined struct sigcontext_struct instead of */
+ /* struct sigcontext. libc6 (glibc2) uses "struct sigcontext" in */
+ /* prototypes, so we have to include the top-level sigcontext.h to */
+ /* make sure the former gets defined to be the latter if appropriate. */
+# include <features.h>
+# if 2 <= __GLIBC__
+# if 2 == __GLIBC__ && 0 == __GLIBC_MINOR__
+ /* glibc 2.1 no longer has sigcontext.h. But signal.h */
+ /* has the right declaration for glibc 2.1. */
+# include <sigcontext.h>
+# endif /* 0 == __GLIBC_MINOR__ */
+# else /* not 2 <= __GLIBC__ */
+ /* libc5 doesn't have <sigcontext.h>: go directly with the kernel */
+ /* one. Check LINUX_VERSION_CODE to see which we should reference. */
+# include <asm/sigcontext.h>
+# endif /* 2 <= __GLIBC__ */
+# endif
+# endif
+# if !defined(OS2) && !defined(PCR) && !defined(AMIGA) && !defined(MACOS) \
+ && !defined(MSWINCE)
+# include <sys/types.h>
+# if !defined(MSWIN32) && !defined(SUNOS4)
+# include <unistd.h>
+# endif
+# endif
+
+# include <stdio.h>
+# if defined(MSWINCE)
+# define SIGSEGV 0 /* value is irrelevant */
+# else
+# include <signal.h>
+# endif
+
+#if defined(LINUX) || defined(LINUX_STACKBOTTOM)
+# include <ctype.h>
+#endif
+
+/* Blatantly OS dependent routines, except for those that are related */
+/* to dynamic loading. */
+
+# if defined(HEURISTIC2) || defined(SEARCH_FOR_DATA_START)
+# define NEED_FIND_LIMIT
+# endif
+
+# if !defined(STACKBOTTOM) && defined(HEURISTIC2)
+# define NEED_FIND_LIMIT
+# endif
+
+# if (defined(SUNOS4) && defined(DYNAMIC_LOADING)) && !defined(PCR)
+# define NEED_FIND_LIMIT
+# endif
+
+# if (defined(SVR4) || defined(AUX) || defined(DGUX) \
+ || (defined(LINUX) && defined(SPARC))) && !defined(PCR)
+# define NEED_FIND_LIMIT
+# endif
+
+#if defined(FREEBSD) && (defined(I386) || defined(X86_64) || defined(powerpc) || defined(__powerpc__))
+# include <machine/trap.h>
+# if !defined(PCR)
+# define NEED_FIND_LIMIT
+# endif
+#endif
+
+#if (defined(NETBSD) || defined(OPENBSD)) && defined(__ELF__) \
+ && !defined(NEED_FIND_LIMIT)
+ /* Used by GC_init_netbsd_elf() below. */
+# define NEED_FIND_LIMIT
+#endif
+
+#ifdef NEED_FIND_LIMIT
+# include <setjmp.h>
+#endif
+
+#ifdef AMIGA
+# define GC_AMIGA_DEF
+# include "AmigaOS.c"
+# undef GC_AMIGA_DEF
+#endif
+
+#if defined(MSWIN32) || defined(MSWINCE)
+# define WIN32_LEAN_AND_MEAN
+# define NOSERVICE
+# include <windows.h>
+#endif
+
+#ifdef MACOS
+# include <Processes.h>
+#endif
+
+#ifdef IRIX5
+# include <sys/uio.h>
+# include <malloc.h> /* for locking */
+#endif
+#if defined(USE_MMAP) || defined(USE_MUNMAP)
+# ifndef USE_MMAP
+ --> USE_MUNMAP requires USE_MMAP
+# endif
+# include <sys/types.h>
+# include <sys/mman.h>
+# include <sys/stat.h>
+# include <errno.h>
+#endif
+
+#ifdef UNIX_LIKE
+# include <fcntl.h>
+# if defined(SUNOS5SIGS) && !defined(FREEBSD)
+# include <sys/siginfo.h>
+# endif
+ /* Define SETJMP and friends to be the version that restores */
+ /* the signal mask. */
+# define SETJMP(env) sigsetjmp(env, 1)
+# define LONGJMP(env, val) siglongjmp(env, val)
+# define JMP_BUF sigjmp_buf
+#else
+# define SETJMP(env) setjmp(env)
+# define LONGJMP(env, val) longjmp(env, val)
+# define JMP_BUF jmp_buf
+#endif
+
+#ifdef DARWIN
+/* for get_etext and friends */
+#include <mach-o/getsect.h>
+#endif
+
+#ifdef DJGPP
+ /* Apparently necessary for djgpp 2.01. May cause problems with */
+ /* other versions. */
+ typedef long unsigned int caddr_t;
+#endif
+
+#ifdef PCR
+# include "il/PCR_IL.h"
+# include "th/PCR_ThCtl.h"
+# include "mm/PCR_MM.h"
+#endif
+
+#if !defined(NO_EXECUTE_PERMISSION)
+# define OPT_PROT_EXEC PROT_EXEC
+#else
+# define OPT_PROT_EXEC 0
+#endif
+
+#if defined(LINUX) && \
+ (defined(USE_PROC_FOR_LIBRARIES) || defined(IA64) || !defined(SMALL_CONFIG))
+
+/* We need to parse /proc/self/maps, either to find dynamic libraries, */
+/* and/or to find the register backing store base (IA64). Do it once */
+/* here. */
+
+#define READ read
+
+/* Repeatedly perform a read call until the buffer is filled or */
+/* we encounter EOF. */
+ssize_t GC_repeat_read(int fd, char *buf, size_t count)
+{
+ ssize_t num_read = 0;
+ ssize_t result;
+
+ while (num_read < count) {
+ result = READ(fd, buf + num_read, count - num_read);
+ if (result < 0) return result;
+ if (result == 0) break;
+ num_read += result;
+ }
+ return num_read;
+}
+
+/*
+ * Apply fn to a buffer containing the contents of /proc/self/maps.
+ * Return the result of fn or, if we failed, 0.
+ * We currently do nothing to /proc/self/maps other than simply read
+ * it. This code could be simplified if we could determine its size
+ * ahead of time.
+ */
+
+word GC_apply_to_maps(word (*fn)(char *))
+{
+ int f;
+ int result;
+ size_t maps_size = 4000; /* Initial guess. */
+ static char init_buf[1];
+ static char *maps_buf = init_buf;
+ static size_t maps_buf_sz = 1;
+
+ /* Read /proc/self/maps, growing maps_buf as necessary. */
+ /* Note that we may not allocate conventionally, and */
+ /* thus can't use stdio. */
+ do {
+ if (maps_size >= maps_buf_sz) {
+ /* Grow only by powers of 2, since we leak "too small" buffers. */
+ while (maps_size >= maps_buf_sz) maps_buf_sz *= 2;
+ maps_buf = GC_scratch_alloc(maps_buf_sz);
+ if (maps_buf == 0) return 0;
+ }
+ f = open("/proc/self/maps", O_RDONLY);
+ if (-1 == f) return 0;
+ maps_size = 0;
+ do {
+ result = GC_repeat_read(f, maps_buf, maps_buf_sz-1);
+ if (result <= 0) return 0;
+ maps_size += result;
+ } while (result == maps_buf_sz-1);
+ close(f);
+ } while (maps_size >= maps_buf_sz);
+ maps_buf[maps_size] = '\0';
+
+ /* Apply fn to result. */
+ return fn(maps_buf);
+}
+
+#endif /* Need GC_apply_to_maps */
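The callback receives the entire maps text at once. As a sketch, here is a function that counts writable mappings using the parser defined just below; it would be invoked as GC_apply_to_maps(count_writable):

    /* Sketch only: relies on GC_parse_map_entry (defined below) */
    /* and the collector's internal word type.                   */
    static word count_writable(char *maps)
    {
        char prot_buf[5];
        word start, end;
        unsigned int maj_dev;
        word n = 0;
        char *p = maps;

        for (;;) {
            p = GC_parse_map_entry(p, &start, &end, prot_buf, &maj_dev);
            if (p == NULL) break;
            if (prot_buf[1] == 'w') n++;   /* writable mapping */
        }
        return n;
    }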
+
+#if defined(LINUX) && (defined(USE_PROC_FOR_LIBRARIES) || defined(IA64))
+//
+// GC_parse_map_entry parses an entry from /proc/self/maps so we can
+// locate all writable data segments that belong to shared libraries.
+// The format of one of these entries and the fields we care about
+// is as follows:
+// XXXXXXXX-XXXXXXXX r-xp 00000000 30:05 260537 name of mapping...\n
+// ^^^^^^^^ ^^^^^^^^ ^^^^ ^^
+// start end prot maj_dev
+//
+// Note that in kernels since about August 2003, the columns no longer have
+// fixed offsets on 64-bit kernels. Hence we no longer rely on fixed offsets
+// anywhere, which is safer anyway.
+//
+
+/*
+ * Assign various fields of the first line in buf_ptr to *start, *end,
+ * *prot_buf and *maj_dev. Only *prot_buf may be set for unwritable maps.
+ */
+char *GC_parse_map_entry(char *buf_ptr, word *start, word *end,
+ char *prot_buf, unsigned int *maj_dev)
+{
+ char *start_start, *end_start, *prot_start, *maj_dev_start;
+ char *p;
+ char *endp;
+
+ if (buf_ptr == NULL || *buf_ptr == '\0') {
+ return NULL;
+ }
+
+ p = buf_ptr;
+ while (isspace(*p)) ++p;
+ start_start = p;
+ GC_ASSERT(isxdigit(*start_start));
+ *start = strtoul(start_start, &endp, 16); p = endp;
+ GC_ASSERT(*p=='-');
+
+ ++p;
+ end_start = p;
+ GC_ASSERT(isxdigit(*end_start));
+ *end = strtoul(end_start, &endp, 16); p = endp;
+ GC_ASSERT(isspace(*p));
+
+ while (isspace(*p)) ++p;
+ prot_start = p;
+ GC_ASSERT(*prot_start == 'r' || *prot_start == '-');
+ memcpy(prot_buf, prot_start, 4);
+ prot_buf[4] = '\0';
+ if (prot_buf[1] == 'w') {/* we can skip the rest if it's not writable. */
+ /* Skip past protection field to offset field */
+ while (!isspace(*p)) ++p; while (isspace(*p)) ++p;
+ GC_ASSERT(isxdigit(*p));
+ /* Skip past offset field, which we ignore */
+ while (!isspace(*p)) ++p; while (isspace(*p)) ++p;
+ maj_dev_start = p;
+ GC_ASSERT(isxdigit(*maj_dev_start));
+ *maj_dev = strtoul(maj_dev_start, NULL, 16);
+ }
+
+ while (*p && *p++ != '\n');
+
+ return p;
+}
+
+#endif /* Need to parse /proc/self/maps. */
+
+#if defined(SEARCH_FOR_DATA_START)
+ /* The I386 case can be handled without a search. The Alpha case */
+ /* used to be handled differently as well, but the rules changed */
+ /* for recent Linux versions. This seems to be the easiest way to */
+ /* cover all versions. */
+
+# ifdef LINUX
+ /* Some Linux distributions arrange to define __data_start. Some */
+ /* define data_start as a weak symbol. The latter is technically */
+ /* broken, since the user program may define data_start, in which */
+ /* case we lose. Nonetheless, we try both, preferring __data_start. */
+ /* We assume gcc-compatible pragmas. */
+# pragma weak __data_start
+ extern int __data_start[];
+# pragma weak data_start
+ extern int data_start[];
+# endif /* LINUX */
+ extern int _end[];
+
+ ptr_t GC_data_start;
+
+ void GC_init_linux_data_start()
+ {
+ extern ptr_t GC_find_limit();
+
+# ifdef LINUX
+ /* Try the easy approaches first: */
+ if ((ptr_t)__data_start != 0) {
+ GC_data_start = (ptr_t)(__data_start);
+ return;
+ }
+ if ((ptr_t)data_start != 0) {
+ GC_data_start = (ptr_t)(data_start);
+ return;
+ }
+# endif /* LINUX */
+ GC_data_start = GC_find_limit((ptr_t)(_end), FALSE);
+ }
+#endif
+
+# ifdef ECOS
+
+# ifndef ECOS_GC_MEMORY_SIZE
+# define ECOS_GC_MEMORY_SIZE (448 * 1024)
+# endif /* ECOS_GC_MEMORY_SIZE */
+
+// setjmp() function, as described in ANSI para 7.6.1.1
+#undef SETJMP
+#define SETJMP( __env__ ) hal_setjmp( __env__ )
+
+// FIXME: This is a simple way of allocating memory which is
+// compatible with ECOS early releases. Later releases use a more
+// sophisticated means of allocating memory than this simple static
+// allocator, but this method is at least bound to work.
+static char memory[ECOS_GC_MEMORY_SIZE];
+static char *brk = memory;
+
+static void *tiny_sbrk(ptrdiff_t increment)
+{
+ void *p = brk;
+
+ brk += increment;
+
+ if (brk > memory + sizeof memory)
+ {
+ brk -= increment;
+ return NULL;
+ }
+
+ return p;
+}
+#define sbrk tiny_sbrk
+# endif /* ECOS */
+
+#if (defined(NETBSD) || defined(OPENBSD)) && defined(__ELF__)
+ ptr_t GC_data_start;
+
+ void GC_init_netbsd_elf()
+ {
+ extern ptr_t GC_find_limit();
+ extern char **environ;
+ /* On some versions this may instead need to be _environ, */
+ /* with a leading underscore. */
+ GC_data_start = GC_find_limit((ptr_t)&environ, FALSE);
+ }
+#endif
+
+# ifdef OS2
+
+# include <stddef.h>
+
+# if !defined(__IBMC__) && !defined(__WATCOMC__) /* e.g. EMX */
+
+struct exe_hdr {
+ unsigned short magic_number;
+ unsigned short padding[29];
+ long new_exe_offset;
+};
+
+#define E_MAGIC(x) (x).magic_number
+#define EMAGIC 0x5A4D
+#define E_LFANEW(x) (x).new_exe_offset
+
+struct e32_exe {
+ unsigned char magic_number[2];
+ unsigned char byte_order;
+ unsigned char word_order;
+ unsigned long exe_format_level;
+ unsigned short cpu;
+ unsigned short os;
+ unsigned long padding1[13];
+ unsigned long object_table_offset;
+ unsigned long object_count;
+ unsigned long padding2[31];
+};
+
+#define E32_MAGIC1(x) (x).magic_number[0]
+#define E32MAGIC1 'L'
+#define E32_MAGIC2(x) (x).magic_number[1]
+#define E32MAGIC2 'X'
+#define E32_BORDER(x) (x).byte_order
+#define E32LEBO 0
+#define E32_WORDER(x) (x).word_order
+#define E32LEWO 0
+#define E32_CPU(x) (x).cpu
+#define E32CPU286 1
+#define E32_OBJTAB(x) (x).object_table_offset
+#define E32_OBJCNT(x) (x).object_count
+
+struct o32_obj {
+ unsigned long size;
+ unsigned long base;
+ unsigned long flags;
+ unsigned long pagemap;
+ unsigned long mapsize;
+ unsigned long reserved;
+};
+
+#define O32_FLAGS(x) (x).flags
+#define OBJREAD 0x0001L
+#define OBJWRITE 0x0002L
+#define OBJINVALID 0x0080L
+#define O32_SIZE(x) (x).size
+#define O32_BASE(x) (x).base
+
+# else /* IBM's compiler */
+
+/* A kludge to get around what appears to be a header file bug */
+# ifndef WORD
+# define WORD unsigned short
+# endif
+# ifndef DWORD
+# define DWORD unsigned long
+# endif
+
+# define EXE386 1
+# include <newexe.h>
+# include <exe386.h>
+
+# endif /* __IBMC__ */
+
+# define INCL_DOSEXCEPTIONS
+# define INCL_DOSPROCESS
+# define INCL_DOSERRORS
+# define INCL_DOSMODULEMGR
+# define INCL_DOSMEMMGR
+# include <os2.h>
+
+
+/* Disable and enable signals during nontrivial allocations */
+
+void GC_disable_signals(void)
+{
+ ULONG nest;
+
+ DosEnterMustComplete(&nest);
+ if (nest != 1) ABORT("nested GC_disable_signals");
+}
+
+void GC_enable_signals(void)
+{
+ ULONG nest;
+
+ DosExitMustComplete(&nest);
+ if (nest != 0) ABORT("GC_enable_signals");
+}
+
+
+# else
+
+# if !defined(PCR) && !defined(AMIGA) && !defined(MSWIN32) \
+ && !defined(MSWINCE) \
+ && !defined(MACOS) && !defined(DJGPP) && !defined(DOS4GW) \
+ && !defined(NOSYS) && !defined(ECOS)
+
+# if defined(sigmask) && !defined(UTS4) && !defined(HURD)
+ /* Use the traditional BSD interface */
+# define SIGSET_T int
+# define SIG_DEL(set, signal) (set) &= ~(sigmask(signal))
+# define SIG_FILL(set) (set) = 0x7fffffff
+ /* Setting the leading bit appears to provoke a bug in some */
+ /* longjmp implementations. Most systems appear not to have */
+ /* a signal 32. */
+# define SIGSETMASK(old, new) (old) = sigsetmask(new)
+# else
+ /* Use POSIX/SYSV interface */
+# define SIGSET_T sigset_t
+# define SIG_DEL(set, signal) sigdelset(&(set), (signal))
+# define SIG_FILL(set) sigfillset(&set)
+# define SIGSETMASK(old, new) sigprocmask(SIG_SETMASK, &(new), &(old))
+# endif
+
+static GC_bool mask_initialized = FALSE;
+
+static SIGSET_T new_mask;
+
+static SIGSET_T old_mask;
+
+static SIGSET_T dummy;
+
+#if defined(PRINTSTATS) && !defined(THREADS)
+# define CHECK_SIGNALS
+ int GC_sig_disabled = 0;
+#endif
+
+void GC_disable_signals()
+{
+ if (!mask_initialized) {
+ SIG_FILL(new_mask);
+
+ SIG_DEL(new_mask, SIGSEGV);
+ SIG_DEL(new_mask, SIGILL);
+ SIG_DEL(new_mask, SIGQUIT);
+# ifdef SIGBUS
+ SIG_DEL(new_mask, SIGBUS);
+# endif
+# ifdef SIGIOT
+ SIG_DEL(new_mask, SIGIOT);
+# endif
+# ifdef SIGEMT
+ SIG_DEL(new_mask, SIGEMT);
+# endif
+# ifdef SIGTRAP
+ SIG_DEL(new_mask, SIGTRAP);
+# endif
+ mask_initialized = TRUE;
+ }
+# ifdef CHECK_SIGNALS
+ if (GC_sig_disabled != 0) ABORT("Nested disables");
+ GC_sig_disabled++;
+# endif
+ SIGSETMASK(old_mask,new_mask);
+}
+
+void GC_enable_signals()
+{
+# ifdef CHECK_SIGNALS
+ if (GC_sig_disabled != 1) ABORT("Unmatched enable");
+ GC_sig_disabled--;
+# endif
+ SIGSETMASK(dummy,old_mask);
+}
+
+# endif /* !PCR */
+
+# endif /*!OS/2 */
+
+/* Ivan Demakov: simplest way (to me) */
+#if defined (DOS4GW)
+ void GC_disable_signals() { }
+ void GC_enable_signals() { }
+#endif
+
+/* Find the page size */
+word GC_page_size;
+
+# if defined(MSWIN32) || defined(MSWINCE)
+ void GC_setpagesize()
+ {
+ GetSystemInfo(&GC_sysinfo);
+ GC_page_size = GC_sysinfo.dwPageSize;
+ }
+
+# else
+# if defined(MPROTECT_VDB) || defined(PROC_VDB) || defined(USE_MMAP) \
+ || defined(USE_MUNMAP)
+ void GC_setpagesize()
+ {
+ GC_page_size = GETPAGESIZE();
+ }
+# else
+ /* It's acceptable to fake it. */
+ void GC_setpagesize()
+ {
+ GC_page_size = HBLKSIZE;
+ }
+# endif
+# endif
+
+/*
+ * Find the base of the stack.
+ * Used only in single-threaded environment.
+ * With threads, GC_mark_roots needs to know how to do this.
+ * Called with allocator lock held.
+ */
+# if defined(MSWIN32) || defined(MSWINCE)
+# define is_writable(prot) ((prot) == PAGE_READWRITE \
+ || (prot) == PAGE_WRITECOPY \
+ || (prot) == PAGE_EXECUTE_READWRITE \
+ || (prot) == PAGE_EXECUTE_WRITECOPY)
+/* Return the number of bytes that are writable starting at p. */
+/* The pointer p is assumed to be page aligned. */
+/* If base is not 0, *base becomes the beginning of the */
+/* allocation region containing p. */
+word GC_get_writable_length(ptr_t p, ptr_t *base)
+{
+ MEMORY_BASIC_INFORMATION buf;
+ word result;
+ word protect;
+
+ result = VirtualQuery(p, &buf, sizeof(buf));
+ if (result != sizeof(buf)) ABORT("Weird VirtualQuery result");
+ if (base != 0) *base = (ptr_t)(buf.AllocationBase);
+ protect = (buf.Protect & ~(PAGE_GUARD | PAGE_NOCACHE));
+ if (!is_writable(protect)) {
+ return(0);
+ }
+ if (buf.State != MEM_COMMIT) return(0);
+ return(buf.RegionSize);
+}
+
+ptr_t GC_get_stack_base()
+{
+ int dummy;
+ ptr_t sp = (ptr_t)(&dummy);
+ ptr_t trunc_sp = (ptr_t)((word)sp & ~(GC_page_size - 1));
+ word size = GC_get_writable_length(trunc_sp, 0);
+
+ return(trunc_sp + size);
+}
+
+
+# endif /* MS Windows */
+
+# ifdef BEOS
+# include <kernel/OS.h>
+ptr_t GC_get_stack_base(){
+ thread_info th;
+ get_thread_info(find_thread(NULL),&th);
+ return th.stack_end;
+}
+# endif /* BEOS */
+
+
+# ifdef OS2
+
+ptr_t GC_get_stack_base()
+{
+ PTIB ptib;
+ PPIB ppib;
+
+ if (DosGetInfoBlocks(&ptib, &ppib) != NO_ERROR) {
+ GC_err_printf0("DosGetInfoBlocks failed\n");
+ ABORT("DosGetInfoBlocks failed\n");
+ }
+ return((ptr_t)(ptib -> tib_pstacklimit));
+}
+
+# endif /* OS2 */
+
+# ifdef AMIGA
+# define GC_AMIGA_SB
+# include "AmigaOS.c"
+# undef GC_AMIGA_SB
+# endif /* AMIGA */
+
+# if defined(NEED_FIND_LIMIT) || defined(UNIX_LIKE)
+
+# ifdef __STDC__
+ typedef void (*handler)(int);
+# else
+ typedef void (*handler)();
+# endif
+
+# if defined(SUNOS5SIGS) || defined(IRIX5) || defined(OSF1) \
+ || defined(HURD) || defined(NETBSD)
+ static struct sigaction old_segv_act;
+# if defined(IRIX5) || defined(HPUX) \
+ || defined(HURD) || defined(NETBSD)
+ static struct sigaction old_bus_act;
+# endif
+# else
+ static handler old_segv_handler, old_bus_handler;
+# endif
+
+# ifdef __STDC__
+ void GC_set_and_save_fault_handler(handler h)
+# else
+ void GC_set_and_save_fault_handler(h)
+ handler h;
+# endif
+ {
+# if defined(SUNOS5SIGS) || defined(IRIX5) \
+ || defined(OSF1) || defined(HURD) || defined(NETBSD)
+ struct sigaction act;
+
+ act.sa_handler = h;
+# if 0 /* Was necessary for Solaris 2.3 and very temporary */
+ /* NetBSD bugs. */
+ act.sa_flags = SA_RESTART | SA_NODEFER;
+# else
+ act.sa_flags = SA_RESTART;
+# endif
+
+ (void) sigemptyset(&act.sa_mask);
+# ifdef GC_IRIX_THREADS
+ /* Older versions have a bug related to retrieving */
+ /* and setting a handler at the same time. */
+ (void) sigaction(SIGSEGV, 0, &old_segv_act);
+ (void) sigaction(SIGSEGV, &act, 0);
+ (void) sigaction(SIGBUS, 0, &old_bus_act);
+ (void) sigaction(SIGBUS, &act, 0);
+# else
+ (void) sigaction(SIGSEGV, &act, &old_segv_act);
+# if defined(IRIX5) \
+ || defined(HPUX) || defined(HURD) || defined(NETBSD)
+ /* Under Irix 5.x or HP/UX, we may get SIGBUS. */
+ /* Pthreads doesn't exist under Irix 5.x, so we */
+ /* don't have to worry in the threads case. */
+ (void) sigaction(SIGBUS, &act, &old_bus_act);
+# endif
+# endif /* GC_IRIX_THREADS */
+# else
+ old_segv_handler = signal(SIGSEGV, h);
+# ifdef SIGBUS
+ old_bus_handler = signal(SIGBUS, h);
+# endif
+# endif
+ }
+# endif /* NEED_FIND_LIMIT || UNIX_LIKE */
+
+# ifdef NEED_FIND_LIMIT
+ /* Some tools to implement HEURISTIC2 */
+# define MIN_PAGE_SIZE 256 /* Smallest conceivable page size, bytes */
+ /* static */ JMP_BUF GC_jmp_buf;
+
+ /*ARGSUSED*/
+ void GC_fault_handler(sig)
+ int sig;
+ {
+ LONGJMP(GC_jmp_buf, 1);
+ }
+
+ void GC_setup_temporary_fault_handler()
+ {
+ GC_set_and_save_fault_handler(GC_fault_handler);
+ }
+
+ void GC_reset_fault_handler()
+ {
+# if defined(SUNOS5SIGS) || defined(IRIX5) \
+ || defined(OSF1) || defined(HURD) || defined(NETBSD)
+ (void) sigaction(SIGSEGV, &old_segv_act, 0);
+# if defined(IRIX5) \
+ || defined(HPUX) || defined(HURD) || defined(NETBSD)
+ (void) sigaction(SIGBUS, &old_bus_act, 0);
+# endif
+# else
+ (void) signal(SIGSEGV, old_segv_handler);
+# ifdef SIGBUS
+ (void) signal(SIGBUS, old_bus_handler);
+# endif
+# endif
+ }
+
+ /* Return the first nonaddressable location > p (up) or */
+ /* the smallest location q s.t. [q,p) is addressable (!up). */
+ /* We assume that p (up) or p-1 (!up) is addressable. */
+ ptr_t GC_find_limit(p, up)
+ ptr_t p;
+ GC_bool up;
+ {
+ static VOLATILE ptr_t result;
+ /* Needs to be static, since otherwise it may not be */
+ /* preserved across the longjmp. Can safely be */
+ /* static since it's only called once, with the */
+ /* allocation lock held. */
+
+
+ GC_setup_temporary_fault_handler();
+ if (SETJMP(GC_jmp_buf) == 0) {
+ result = (ptr_t)(((word)(p))
+ & ~(MIN_PAGE_SIZE-1));
+ for (;;) {
+ if (up) {
+ result += MIN_PAGE_SIZE;
+ } else {
+ result -= MIN_PAGE_SIZE;
+ }
+ GC_noop1((word)(*result));
+ }
+ }
+ GC_reset_fault_handler();
+ if (!up) {
+ result += MIN_PAGE_SIZE;
+ }
+ return(result);
+ }
+# endif
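The probe above is a classic fault-driven search: install a handler that longjmps, touch successive pages until one faults, then restore state. The same idea as a self-contained POSIX sketch, simplified to SIGSEGV only (the real code also covers SIGBUS and remembers the previous handlers):

    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf probe_env;

    static void probe_fault(int sig) { siglongjmp(probe_env, 1); }

    /* Return nonzero iff the byte at p is readable without faulting. */
    static int is_readable(volatile char *p)
    {
        struct sigaction act, old;
        int ok = 0;

        act.sa_handler = probe_fault;
        act.sa_flags = 0;
        sigemptyset(&act.sa_mask);
        sigaction(SIGSEGV, &act, &old);
        if (sigsetjmp(probe_env, 1) == 0) {
            (void)*p;                  /* may fault and longjmp out */
            ok = 1;
        }
        sigaction(SIGSEGV, &old, NULL);
        return ok;
    }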
+
+#if defined(ECOS) || defined(NOSYS)
+ ptr_t GC_get_stack_base()
+ {
+ return STACKBOTTOM;
+ }
+#endif
+
+#ifdef HPUX_STACKBOTTOM
+
+#include <sys/param.h>
+#include <sys/pstat.h>
+
+ ptr_t GC_get_register_stack_base(void)
+ {
+ struct pst_vm_status vm_status;
+
+ int i = 0;
+ while (pstat_getprocvm(&vm_status, sizeof(vm_status), 0, i++) == 1) {
+ if (vm_status.pst_type == PS_RSESTACK) {
+ return (ptr_t) vm_status.pst_vaddr;
+ }
+ }
+
+ /* old way to get the register stackbottom */
+ return (ptr_t)(((word)GC_stackbottom - BACKING_STORE_DISPLACEMENT - 1)
+ & ~(BACKING_STORE_ALIGNMENT - 1));
+ }
+
+#endif /* HPUX_STACK_BOTTOM */
+
+#ifdef LINUX_STACKBOTTOM
+
+#include <sys/types.h>
+#include <sys/stat.h>
+
+# define STAT_SKIP 27 /* Number of fields preceding startstack */
+ /* field in /proc/self/stat */
+
+#ifdef USE_LIBC_PRIVATES
+# pragma weak __libc_stack_end
+ extern ptr_t __libc_stack_end;
+#endif
+
+# ifdef IA64
+ /* Try to read the backing store base from /proc/self/maps. */
+ /* We look for the writable mapping with a 0 major device, */
+ /* which is as close to our frame as possible, but below it.*/
+ static word backing_store_base_from_maps(char *maps)
+ {
+ char prot_buf[5];
+ char *buf_ptr = maps;
+ word start, end;
+ unsigned int maj_dev;
+ word current_best = 0;
+ word dummy;
+
+ for (;;) {
+ buf_ptr = GC_parse_map_entry(buf_ptr, &start, &end, prot_buf, &maj_dev);
+ if (buf_ptr == NULL) return current_best;
+ if (prot_buf[1] == 'w' && maj_dev == 0) {
+ if (end < (word)(&dummy) && start > current_best) current_best = start;
+ }
+ }
+ return current_best;
+ }
+
+ static word backing_store_base_from_proc(void)
+ {
+ return GC_apply_to_maps(backing_store_base_from_maps);
+ }
+
+# ifdef USE_LIBC_PRIVATES
+# pragma weak __libc_ia64_register_backing_store_base
+ extern ptr_t __libc_ia64_register_backing_store_base;
+# endif
+
+ ptr_t GC_get_register_stack_base(void)
+ {
+# ifdef USE_LIBC_PRIVATES
+ if (0 != &__libc_ia64_register_backing_store_base
+ && 0 != __libc_ia64_register_backing_store_base) {
+ /* Glibc 2.2.4 has a bug such that for dynamically linked */
+ /* executables __libc_ia64_register_backing_store_base is */
+ /* defined but uninitialized during constructor calls. */
+ /* Hence we check for both nonzero address and value. */
+ return __libc_ia64_register_backing_store_base;
+ }
+# endif
+ word result = backing_store_base_from_proc();
+ if (0 == result) {
+ /* Use dumb heuristics. Works only for default configuration. */
+ result = (word)GC_stackbottom - BACKING_STORE_DISPLACEMENT;
+ result += BACKING_STORE_ALIGNMENT - 1;
+ result &= ~(BACKING_STORE_ALIGNMENT - 1);
+ /* Verify that it's at least readable. If not, we goofed. */
+ GC_noop1(*(word *)result);
+ }
+ return (ptr_t)result;
+ }
+# endif
+
+ ptr_t GC_linux_stack_base(void)
+ {
+ /* We read the stack base value from /proc/self/stat. We do this */
+ /* using direct I/O system calls in order to avoid calling malloc */
+ /* in case REDIRECT_MALLOC is defined. */
+# define STAT_BUF_SIZE 4096
+# define STAT_READ read
+ /* Should probably call the real read, if read is wrapped. */
+ char stat_buf[STAT_BUF_SIZE];
+ int f;
+ char c;
+ word result = 0;
+ size_t i, buf_offset = 0;
+
+ /* First try the easy way. This should work for glibc 2.2 */
+ /* This fails in a prelinked ("prelink" command) executable */
+ /* since the correct value of __libc_stack_end never */
+ /* becomes visible to us. The second test works around */
+ /* this. */
+# ifdef USE_LIBC_PRIVATES
+ if (0 != &__libc_stack_end && 0 != __libc_stack_end ) {
+# ifdef IA64
+ /* Some versions of glibc set the address 16 bytes too */
+ /* low while the initialization code is running. */
+ if (((word)__libc_stack_end & 0xfff) + 0x10 < 0x1000) {
+ return __libc_stack_end + 0x10;
+ } /* Otherwise it's not safe to add 16 bytes and we fall */
+ /* back to using /proc. */
+# else
+# ifdef SPARC
+ /* Older versions of glibc for 64-bit Sparc do not set
+ * this variable correctly, it gets set to either zero
+ * or one.
+ */
+ if (__libc_stack_end != (ptr_t) (unsigned long)0x1)
+ return __libc_stack_end;
+# else
+ return __libc_stack_end;
+# endif
+# endif
+ }
+# endif
+ f = open("/proc/self/stat", O_RDONLY);
+ if (f < 0 || STAT_READ(f, stat_buf, STAT_BUF_SIZE) < 2 * STAT_SKIP) {
+ ABORT("Couldn't read /proc/self/stat");
+ }
+ c = stat_buf[buf_offset++];
+ /* Skip the required number of fields. This number is hopefully */
+ /* constant across all Linux implementations. */
+ for (i = 0; i < STAT_SKIP; ++i) {
+ while (isspace(c)) c = stat_buf[buf_offset++];
+ while (!isspace(c)) c = stat_buf[buf_offset++];
+ }
+ while (isspace(c)) c = stat_buf[buf_offset++];
+ while (isdigit(c)) {
+ result *= 10;
+ result += c - '0';
+ c = stat_buf[buf_offset++];
+ }
+ close(f);
+ if (result < 0x10000000) ABORT("Absurd stack bottom value");
+ return (ptr_t)result;
+ }
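+
+ /* For reference (a sketch; field positions per proc(5), assumed */
+ /* stable): the stat line is one space-separated record, e.g. */
+ /*   1234 (myprog) R 1 ... <startstack> ... */
+ /* With the usual STAT_SKIP of 27, the loop above lands on the */
+ /* 28th field, documented as startstack, and parses it as decimal. */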
+
+#endif /* LINUX_STACKBOTTOM */
+
+#ifdef FREEBSD_STACKBOTTOM
+
+/* This uses an undocumented sysctl call, but at least one expert */
+/* believes it will stay. */
+
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/sysctl.h>
+
+ ptr_t GC_freebsd_stack_base(void)
+ {
+ int nm[2] = {CTL_KERN, KERN_USRSTACK};
+ ptr_t base;
+ size_t len = sizeof(ptr_t);
+ int r = sysctl(nm, 2, &base, &len, NULL, 0);
+
+ if (r) ABORT("Error getting stack base");
+
+ return base;
+ }
+
+#endif /* FREEBSD_STACKBOTTOM */
+
+#if !defined(BEOS) && !defined(AMIGA) && !defined(MSWIN32) \
+ && !defined(MSWINCE) && !defined(OS2) && !defined(NOSYS) && !defined(ECOS)
+
+ptr_t GC_get_stack_base()
+{
+# if defined(HEURISTIC1) || defined(HEURISTIC2) || \
+ defined(LINUX_STACKBOTTOM) || defined(FREEBSD_STACKBOTTOM)
+ word dummy;
+ ptr_t result;
+# endif
+
+# define STACKBOTTOM_ALIGNMENT_M1 ((word)STACK_GRAN - 1)
+
+# ifdef STACKBOTTOM
+ return(STACKBOTTOM);
+# else
+# ifdef HEURISTIC1
+# ifdef STACK_GROWS_DOWN
+ result = (ptr_t)((((word)(&dummy))
+ + STACKBOTTOM_ALIGNMENT_M1)
+ & ~STACKBOTTOM_ALIGNMENT_M1);
+# else
+ result = (ptr_t)(((word)(&dummy))
+ & ~STACKBOTTOM_ALIGNMENT_M1);
+# endif
+# endif /* HEURISTIC1 */
+# ifdef LINUX_STACKBOTTOM
+ result = GC_linux_stack_base();
+# endif
+# ifdef FREEBSD_STACKBOTTOM
+ result = GC_freebsd_stack_base();
+# endif
+# ifdef HEURISTIC2
+# ifdef STACK_GROWS_DOWN
+ result = GC_find_limit((ptr_t)(&dummy), TRUE);
+# ifdef HEURISTIC2_LIMIT
+ if (result > HEURISTIC2_LIMIT
+ && (ptr_t)(&dummy) < HEURISTIC2_LIMIT) {
+ result = HEURISTIC2_LIMIT;
+ }
+# endif
+# else
+ result = GC_find_limit((ptr_t)(&dummy), FALSE);
+# ifdef HEURISTIC2_LIMIT
+ if (result < HEURISTIC2_LIMIT
+ && (ptr_t)(&dummy) > HEURISTIC2_LIMIT) {
+ result = HEURISTIC2_LIMIT;
+ }
+# endif
+# endif
+
+# endif /* HEURISTIC2 */
+# ifdef STACK_GROWS_DOWN
+ if (result == 0) result = (ptr_t)(signed_word)(-sizeof(ptr_t));
+# endif
+ return(result);
+# endif /* STACKBOTTOM */
+}
+
+# endif /* !AMIGA, !OS/2, !MS Windows, !BEOS, !NOSYS, !ECOS */
+
+/*
+ * Register static data segment(s) as roots.
+ * If more data segments are added later, then they need to be registered
+ * at that point (as we do with SunOS dynamic loading),
+ * or GC_mark_roots needs to check for them (as we do with PCR).
+ * Called with allocator lock held.
+ */
+
+# ifdef OS2
+
+void GC_register_data_segments()
+{
+ PTIB ptib;
+ PPIB ppib;
+ HMODULE module_handle;
+# define PBUFSIZ 512
+ UCHAR path[PBUFSIZ];
+ FILE * myexefile;
+ struct exe_hdr hdrdos; /* MSDOS header. */
+ struct e32_exe hdr386; /* Real header for my executable */
+ struct o32_obj seg; /* Current segment */
+ int nsegs;
+
+
+ if (DosGetInfoBlocks(&ptib, &ppib) != NO_ERROR) {
+ GC_err_printf0("DosGetInfoBlocks failed\n");
+ ABORT("DosGetInfoBlocks failed\n");
+ }
+ module_handle = ppib -> pib_hmte;
+ if (DosQueryModuleName(module_handle, PBUFSIZ, path) != NO_ERROR) {
+ GC_err_printf0("DosQueryModuleName failed\n");
+ ABORT("DosGetInfoBlocks failed\n");
+ }
+ myexefile = fopen(path, "rb");
+ if (myexefile == 0) {
+ GC_err_puts("Couldn't open executable ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Failed to open executable\n");
+ }
+ if (fread((char *)(&hdrdos), 1, sizeof hdrdos, myexefile) < sizeof hdrdos) {
+ GC_err_puts("Couldn't read MSDOS header from ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Couldn't read MSDOS header");
+ }
+ if (E_MAGIC(hdrdos) != EMAGIC) {
+ GC_err_puts("Executable has wrong DOS magic number: ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Bad DOS magic number");
+ }
+ if (fseek(myexefile, E_LFANEW(hdrdos), SEEK_SET) != 0) {
+ GC_err_puts("Seek to new header failed in ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Bad DOS magic number");
+ }
+ if (fread((char *)(&hdr386), 1, sizeof hdr386, myexefile) < sizeof hdr386) {
+ GC_err_puts("Couldn't read MSDOS header from ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Couldn't read OS/2 header");
+ }
+ if (E32_MAGIC1(hdr386) != E32MAGIC1 || E32_MAGIC2(hdr386) != E32MAGIC2) {
+ GC_err_puts("Executable has wrong OS/2 magic number:");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Bad OS/2 magic number");
+ }
+ if ( E32_BORDER(hdr386) != E32LEBO || E32_WORDER(hdr386) != E32LEWO) {
+ GC_err_puts("Executable %s has wrong byte order: ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Bad byte order");
+ }
+ if ( E32_CPU(hdr386) == E32CPU286) {
+ GC_err_puts("GC can't handle 80286 executables: ");
+ GC_err_puts(path); GC_err_puts("\n");
+ EXIT();
+ }
+ if (fseek(myexefile, E_LFANEW(hdrdos) + E32_OBJTAB(hdr386),
+ SEEK_SET) != 0) {
+ GC_err_puts("Seek to object table failed: ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Seek to object table failed");
+ }
+ for (nsegs = E32_OBJCNT(hdr386); nsegs > 0; nsegs--) {
+ int flags;
+ if (fread((char *)(&seg), 1, sizeof seg, myexefile) < sizeof seg) {
+ GC_err_puts("Couldn't read obj table entry from ");
+ GC_err_puts(path); GC_err_puts("\n");
+ ABORT("Couldn't read obj table entry");
+ }
+ flags = O32_FLAGS(seg);
+ if (!(flags & OBJWRITE)) continue;
+ if (!(flags & OBJREAD)) continue;
+ if (flags & OBJINVALID) {
+ GC_err_printf0("Object with invalid pages?\n");
+ continue;
+ }
+ GC_add_roots_inner(O32_BASE(seg), O32_BASE(seg)+O32_SIZE(seg), FALSE);
+ }
+}
+
+# else /* !OS2 */
+
+# if defined(MSWIN32) || defined(MSWINCE)
+
+# ifdef MSWIN32
+ /* Unfortunately, we have to handle win32s very differently from NT, */
+ /* since VirtualQuery has very different semantics. In particular, */
+ /* under win32s a VirtualQuery call on an unmapped page returns an */
+ /* invalid result. Under NT, GC_register_data_segments is a noop and */
+ /* all real work is done by GC_register_dynamic_libraries. Under */
+ /* win32s, we cannot find the data segments associated with dll's. */
+ /* We register the main data segment here. */
+ GC_bool GC_no_win32_dlls = FALSE;
+ /* This used to be set for gcc, to avoid dealing with */
+ /* the structured exception handling issues. But we now have */
+ /* assembly code to do that right. */
+ GC_bool GC_wnt = FALSE;
+ /* This is a Windows NT derivative, i.e. NT, W2K, XP or later. */
+
+ void GC_init_win32()
+ {
+ /* if we're running under win32s, assume that no DLLs will be loaded */
+ DWORD v = GetVersion();
+ GC_wnt = !(v & 0x80000000);
+ GC_no_win32_dlls |= ((!GC_wnt) && (v & 0xff) <= 3);
+ }
+
+ /* Return the smallest address a such that VirtualQuery */
+ /* returns correct results for all addresses between a and start. */
+ /* Assumes VirtualQuery returns correct information for start. */
+ ptr_t GC_least_described_address(ptr_t start)
+ {
+ MEMORY_BASIC_INFORMATION buf;
+ DWORD result;
+ LPVOID limit;
+ ptr_t p;
+ LPVOID q;
+
+ limit = GC_sysinfo.lpMinimumApplicationAddress;
+ p = (ptr_t)((word)start & ~(GC_page_size - 1));
+ for (;;) {
+ q = (LPVOID)(p - GC_page_size);
+ if ((ptr_t)q > (ptr_t)p /* underflow */ || q < limit) break;
+ result = VirtualQuery(q, &buf, sizeof(buf));
+ if (result != sizeof(buf) || buf.AllocationBase == 0) break;
+ p = (ptr_t)(buf.AllocationBase);
+ }
+ return(p);
+ }
+# endif
+
+# ifndef REDIRECT_MALLOC
+ /* We maintain a linked list of AllocationBase values that we know */
+ /* correspond to malloc heap sections. Currently this is only called */
+ /* during a GC. But there is some hope that for long running */
+ /* programs we will eventually see most heap sections. */
+
+ /* In the long run, it would be more reliable to occasionally walk */
+ /* the malloc heap with HeapWalk on the default heap. But that */
+ /* apparently works only for NT-based Windows. */
+
+ /* In the long run, a better data structure would also be nice ... */
+ struct GC_malloc_heap_list {
+ void * allocation_base;
+ struct GC_malloc_heap_list *next;
+ } *GC_malloc_heap_l = 0;
+
+ /* Is p the base of one of the malloc heap sections we already know */
+ /* about? */
+ GC_bool GC_is_malloc_heap_base(ptr_t p)
+ {
+ struct GC_malloc_heap_list *q = GC_malloc_heap_l;
+
+ while (0 != q) {
+ if (q -> allocation_base == p) return TRUE;
+ q = q -> next;
+ }
+ return FALSE;
+ }
+
+ void *GC_get_allocation_base(void *p)
+ {
+ MEMORY_BASIC_INFORMATION buf;
+ DWORD result = VirtualQuery(p, &buf, sizeof(buf));
+ if (result != sizeof(buf)) {
+ ABORT("Weird VirtualQuery result");
+ }
+ return buf.AllocationBase;
+ }
+
+ size_t GC_max_root_size = 100000; /* Approximate largest root size. */
+
+ void GC_add_current_malloc_heap()
+ {
+ struct GC_malloc_heap_list *new_l =
+ malloc(sizeof(struct GC_malloc_heap_list));
+ void * candidate = GC_get_allocation_base(new_l);
+
+ if (new_l == 0) return;
+ if (GC_is_malloc_heap_base(candidate)) {
+ /* Try a little harder to find the malloc heap. */
+ size_t req_size = 10000;
+ do {
+ void *p = malloc(req_size);
+ if (0 == p) { free(new_l); return; }
+ candidate = GC_get_allocation_base(p);
+ free(p);
+ req_size *= 2;
+ } while (GC_is_malloc_heap_base(candidate)
+ && req_size < GC_max_root_size/10 && req_size < 500000);
+ if (GC_is_malloc_heap_base(candidate)) {
+ free(new_l); return;
+ }
+ }
+# ifdef CONDPRINT
+ if (GC_print_stats)
+ GC_printf1("Found new system malloc AllocationBase at 0x%lx\n",
+ candidate);
+# endif
+ new_l -> allocation_base = candidate;
+ new_l -> next = GC_malloc_heap_l;
+ GC_malloc_heap_l = new_l;
+ }
+# endif /* REDIRECT_MALLOC */
+
+ /* Is p the start of either the malloc heap, or of one of our */
+ /* heap sections? */
+ GC_bool GC_is_heap_base (ptr_t p)
+ {
+
+ unsigned i;
+
+# ifndef REDIRECT_MALLOC
+ static word last_gc_no = -1;
+
+ if (last_gc_no != GC_gc_no) {
+ GC_add_current_malloc_heap();
+ last_gc_no = GC_gc_no;
+ }
+ if (GC_root_size > GC_max_root_size) GC_max_root_size = GC_root_size;
+ if (GC_is_malloc_heap_base(p)) return TRUE;
+# endif
+ for (i = 0; i < GC_n_heap_bases; i++) {
+ if (GC_heap_bases[i] == p) return TRUE;
+ }
+ return FALSE ;
+ }
+
+# ifdef MSWIN32
+ void GC_register_root_section(ptr_t static_root)
+ {
+ MEMORY_BASIC_INFORMATION buf;
+ DWORD result;
+ DWORD protect;
+ LPVOID p;
+ char * base;
+ char * limit, * new_limit;
+
+ if (!GC_no_win32_dlls) return;
+ p = base = limit = GC_least_described_address(static_root);
+ while (p < GC_sysinfo.lpMaximumApplicationAddress) {
+ result = VirtualQuery(p, &buf, sizeof(buf));
+ if (result != sizeof(buf) || buf.AllocationBase == 0
+ || GC_is_heap_base(buf.AllocationBase)) break;
+ new_limit = (char *)p + buf.RegionSize;
+ protect = buf.Protect;
+ if (buf.State == MEM_COMMIT
+ && is_writable(protect)) {
+ if ((char *)p == limit) {
+ limit = new_limit;
+ } else {
+ if (base != limit) GC_add_roots_inner(base, limit, FALSE);
+ base = p;
+ limit = new_limit;
+ }
+ }
+ if (p > (LPVOID)new_limit /* overflow */) break;
+ p = (LPVOID)new_limit;
+ }
+ if (base != limit) GC_add_roots_inner(base, limit, FALSE);
+ }
+#endif
+
+ void GC_register_data_segments()
+ {
+# ifdef MSWIN32
+ static char dummy;
+ GC_register_root_section((ptr_t)(&dummy));
+# endif
+ }
+
+# else /* !OS2 && !Windows */
+
+# if (defined(SVR4) || defined(AUX) || defined(DGUX) \
+ || (defined(LINUX) && defined(SPARC))) && !defined(PCR)
+ptr_t GC_SysVGetDataStart(max_page_size, etext_addr)
+int max_page_size;
+int * etext_addr;
+{
+ word text_end = ((word)(etext_addr) + sizeof(word) - 1)
+ & ~(sizeof(word) - 1);
+ /* etext rounded to word boundary */
+ word next_page = ((text_end + (word)max_page_size - 1)
+ & ~((word)max_page_size - 1));
+ word page_offset = (text_end & ((word)max_page_size - 1));
+ VOLATILE char * result = (char *)(next_page + page_offset);
+ /* Note that this isn't equivalent to just adding */
+ /* max_page_size to &etext if &etext is at a page boundary. */
+
+ GC_setup_temporary_fault_handler();
+ if (SETJMP(GC_jmp_buf) == 0) {
+ /* Try writing to the address. */
+ *result = *result;
+ GC_reset_fault_handler();
+ } else {
+ GC_reset_fault_handler();
+ /* We got here via a longjmp. The address is not readable. */
+ /* This is known to happen under Solaris 2.4 + gcc, which places */
+ /* string constants in the text segment, but after etext. */
+ /* Use plan B. Note that we now know there is a gap between */
+ /* text and data segments, so plan A bought us something. */
+ result = (char *)GC_find_limit((ptr_t)(DATAEND), FALSE);
+ }
+ return((ptr_t)result);
+}
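+
+/* Worked example of the computation above (illustrative numbers): */
+/* with max_page_size == 0x1000 and etext_addr == 0x10234, we get */
+/* text_end == 0x10234, next_page == 0x11000, page_offset == 0x234, */
+/* and result == 0x11234. If etext_addr is page aligned at 0x10000, */
+/* next_page == 0x10000 and result == 0x10000, i.e. etext itself */
+/* rather than etext + max_page_size, which is the point of the */
+/* note above. */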
+# endif
+
+# if defined(FREEBSD) && (defined(I386) || defined(X86_64) || defined(powerpc) || defined(__powerpc__)) && !defined(PCR)
+/* It's unclear whether this should be identical to the above, or */
+/* whether it should apply to non-X86 architectures. */
+/* For now we don't assume that there is always an empty page after */
+/* etext, though in some cases there actually seems to be slightly more. */
+/* This also deals with holes between read-only data and writable data. */
+ptr_t GC_FreeBSDGetDataStart(max_page_size, etext_addr)
+int max_page_size;
+int * etext_addr;
+{
+ word text_end = ((word)(etext_addr) + sizeof(word) - 1)
+ & ~(sizeof(word) - 1);
+ /* etext rounded to word boundary */
+ VOLATILE word next_page = (text_end + (word)max_page_size - 1)
+ & ~((word)max_page_size - 1);
+ VOLATILE ptr_t result = (ptr_t)text_end;
+ GC_setup_temporary_fault_handler();
+ if (SETJMP(GC_jmp_buf) == 0) {
+ /* Try reading at the address. */
+ /* This should happen before there is another thread. */
+ for (; next_page < (word)(DATAEND); next_page += (word)max_page_size)
+ *(VOLATILE char *)next_page;
+ GC_reset_fault_handler();
+ } else {
+ GC_reset_fault_handler();
+ /* As above, we go to plan B */
+ result = GC_find_limit((ptr_t)(DATAEND), FALSE);
+ }
+ return(result);
+}
+
+# endif
+
+
+#ifdef AMIGA
+
+# define GC_AMIGA_DS
+# include "AmigaOS.c"
+# undef GC_AMIGA_DS
+
+#else /* !OS2 && !Windows && !AMIGA */
+
+void GC_register_data_segments()
+{
+# if !defined(PCR) && !defined(SRC_M3) && !defined(MACOS)
+# if defined(REDIRECT_MALLOC) && defined(GC_SOLARIS_THREADS)
+ /* As of Solaris 2.3, the Solaris threads implementation */
+ /* allocates the data structure for the initial thread with */
+ /* sbrk at process startup. It needs to be scanned, so that */
+ /* we don't lose some malloc allocated data structures */
+ /* hanging from it. We're on thin ice here ... */
+ extern caddr_t sbrk();
+
+ GC_add_roots_inner(DATASTART, (char *)sbrk(0), FALSE);
+# else
+ GC_add_roots_inner(DATASTART, (char *)(DATAEND), FALSE);
+# if defined(DATASTART2)
+ GC_add_roots_inner(DATASTART2, (char *)(DATAEND2), FALSE);
+# endif
+# endif
+# endif
+# if defined(MACOS)
+ {
+# if defined(THINK_C)
+ extern void* GC_MacGetDataStart(void);
+ /* globals begin above stack and end at a5. */
+ GC_add_roots_inner((ptr_t)GC_MacGetDataStart(),
+ (ptr_t)LMGetCurrentA5(), FALSE);
+# else
+# if defined(__MWERKS__)
+# if !__POWERPC__
+ extern void* GC_MacGetDataStart(void);
+ /* MATTHEW: Function to handle Far Globals (CW Pro 3) */
+# if __option(far_data)
+ extern void* GC_MacGetDataEnd(void);
+# endif
+ /* globals begin above stack and end at a5. */
+ GC_add_roots_inner((ptr_t)GC_MacGetDataStart(),
+ (ptr_t)LMGetCurrentA5(), FALSE);
+ /* MATTHEW: Handle Far Globals */
+# if __option(far_data)
+ /* Far globals follow the QD globals: */
+ GC_add_roots_inner((ptr_t)LMGetCurrentA5(),
+ (ptr_t)GC_MacGetDataEnd(), FALSE);
+# endif
+# else
+ extern char __data_start__[], __data_end__[];
+ GC_add_roots_inner((ptr_t)&__data_start__,
+ (ptr_t)&__data_end__, FALSE);
+# endif /* __POWERPC__ */
+# endif /* __MWERKS__ */
+# endif /* !THINK_C */
+ }
+# endif /* MACOS */
+
+ /* Dynamic libraries are added at every collection, since they may */
+ /* change. */
+}
+
+# endif /* ! AMIGA */
+# endif /* ! MSWIN32 && ! MSWINCE*/
+# endif /* ! OS2 */
+
+/*
+ * Auxiliary routines for obtaining memory from OS.
+ */
+
+# if !defined(OS2) && !defined(PCR) && !defined(AMIGA) \
+ && !defined(MSWIN32) && !defined(MSWINCE) \
+ && !defined(MACOS) && !defined(DOS4GW)
+
+# ifdef SUNOS4
+ extern caddr_t sbrk();
+# endif
+# ifdef __STDC__
+# define SBRK_ARG_T ptrdiff_t
+# else
+# define SBRK_ARG_T int
+# endif
+
+
+# if 0 && defined(RS6000) /* We now use mmap */
+/* The compiler seems to generate speculative reads one past the end of */
+/* an allocated object. Hence we need to make sure that the page */
+/* following the last heap page is also mapped. */
+ptr_t GC_unix_get_mem(bytes)
+word bytes;
+{
+ caddr_t cur_brk = (caddr_t)sbrk(0);
+ caddr_t result;
+ SBRK_ARG_T lsbs = (word)cur_brk & (GC_page_size-1);
+ static caddr_t my_brk_val = 0;
+
+ if ((SBRK_ARG_T)bytes < 0) return(0); /* too big */
+ if (lsbs != 0) {
+ if((caddr_t)(sbrk(GC_page_size - lsbs)) == (caddr_t)(-1)) return(0);
+ }
+ if (cur_brk == my_brk_val) {
+ /* Use the extra block we allocated last time. */
+ result = (ptr_t)sbrk((SBRK_ARG_T)bytes);
+ if (result == (caddr_t)(-1)) return(0);
+ result -= GC_page_size;
+ } else {
+ result = (ptr_t)sbrk(GC_page_size + (SBRK_ARG_T)bytes);
+ if (result == (caddr_t)(-1)) return(0);
+ }
+ my_brk_val = result + bytes + GC_page_size; /* Always page aligned */
+ return((ptr_t)result);
+}
+
+#else /* Not RS6000 */
+
+#if defined(USE_MMAP) || defined(USE_MUNMAP)
+
+#ifdef USE_MMAP_FIXED
+# define GC_MMAP_FLAGS MAP_FIXED | MAP_PRIVATE
+ /* Seems to yield better performance on Solaris 2, but can */
+ /* be unreliable if something is already mapped at the address. */
+#else
+# define GC_MMAP_FLAGS MAP_PRIVATE
+#endif
+
+#ifdef USE_MMAP_ANON
+# define zero_fd -1
+# if defined(MAP_ANONYMOUS)
+# define OPT_MAP_ANON MAP_ANONYMOUS
+# else
+# define OPT_MAP_ANON MAP_ANON
+# endif
+#else
+ static int zero_fd;
+# define OPT_MAP_ANON 0
+#endif
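+
+/* The two cases above are equivalent idioms for obtaining */
+/* zero-filled anonymous memory; roughly (a sketch): */
+/*   mmap(0, len, prot, MAP_PRIVATE | OPT_MAP_ANON, -1, 0) */
+/* versus */
+/*   mmap(0, len, prot, MAP_PRIVATE, open("/dev/zero", O_RDONLY), 0) */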
+
+#endif /* defined(USE_MMAP) || defined(USE_MUNMAP) */
+
+#if defined(USE_MMAP)
+/* Tested only under Linux, IRIX5 and Solaris 2 */
+
+#ifndef HEAP_START
+# define HEAP_START 0
+#endif
+
+ptr_t GC_unix_get_mem(bytes)
+word bytes;
+{
+ void *result;
+ static ptr_t last_addr = HEAP_START;
+
+# ifndef USE_MMAP_ANON
+ static GC_bool initialized = FALSE;
+
+ if (!initialized) {
+ zero_fd = open("/dev/zero", O_RDONLY);
+ fcntl(zero_fd, F_SETFD, FD_CLOEXEC);
+ initialized = TRUE;
+ }
+# endif
+
+ if (bytes & (GC_page_size -1)) ABORT("Bad GET_MEM arg");
+ result = mmap(last_addr, bytes, PROT_READ | PROT_WRITE | OPT_PROT_EXEC,
+ GC_MMAP_FLAGS | OPT_MAP_ANON, zero_fd, 0/* offset */);
+ if (result == MAP_FAILED) return(0);
+ last_addr = (ptr_t)result + bytes + GC_page_size - 1;
+ last_addr = (ptr_t)((word)last_addr & ~(GC_page_size - 1));
+# if !defined(LINUX)
+ if (last_addr == 0) {
+ /* Oops. We got the end of the address space. This isn't */
+ /* usable by arbitrary C code, since one-past-end pointers */
+ /* don't work, so we discard it and try again. */
+ munmap(result, (size_t)(-GC_page_size) - (size_t)result);
+ /* Leave last page mapped, so we can't repeat. */
+ return GC_unix_get_mem(bytes);
+ }
+# else
+ GC_ASSERT(last_addr != 0);
+# endif
+ return((ptr_t)result);
+}
+
+#else /* Not RS6000, not USE_MMAP */
+ptr_t GC_unix_get_mem(bytes)
+word bytes;
+{
+ ptr_t result;
+# ifdef IRIX5
+ /* Bare sbrk isn't thread safe. Play by malloc rules. */
+ /* The equivalent may be needed on other systems as well. */
+ __LOCK_MALLOC();
+# endif
+ {
+ ptr_t cur_brk = (ptr_t)sbrk(0);
+ SBRK_ARG_T lsbs = (word)cur_brk & (GC_page_size-1);
+
+ if ((SBRK_ARG_T)bytes < 0) return(0); /* too big */
+ if (lsbs != 0) {
+ if((ptr_t)sbrk(GC_page_size - lsbs) == (ptr_t)(-1)) return(0);
+ }
+ result = (ptr_t)sbrk((SBRK_ARG_T)bytes);
+ if (result == (ptr_t)(-1)) result = 0;
+ }
+# ifdef IRIX5
+ __UNLOCK_MALLOC();
+# endif
+ return(result);
+}
+
+#endif /* Not USE_MMAP */
+#endif /* Not RS6000 */
+
+# endif /* UN*X */
+
+# ifdef OS2
+
+void * os2_alloc(size_t bytes)
+{
+ void * result;
+
+ if (DosAllocMem(&result, bytes, PAG_EXECUTE | PAG_READ |
+ PAG_WRITE | PAG_COMMIT)
+ != NO_ERROR) {
+ return(0);
+ }
+ if (result == 0) return(os2_alloc(bytes));
+ return(result);
+}
+
+# endif /* OS2 */
+
+
+# if defined(MSWIN32) || defined(MSWINCE)
+SYSTEM_INFO GC_sysinfo;
+# endif
+
+# ifdef MSWIN32
+
+# ifdef USE_GLOBAL_ALLOC
+# define GLOBAL_ALLOC_TEST 1
+# else
+# define GLOBAL_ALLOC_TEST GC_no_win32_dlls
+# endif
+
+word GC_n_heap_bases = 0;
+
+ptr_t GC_win32_get_mem(bytes)
+word bytes;
+{
+ ptr_t result;
+
+ if (GLOBAL_ALLOC_TEST) {
+ /* VirtualAlloc doesn't like PAGE_EXECUTE_READWRITE. */
+ /* There are also unconfirmed rumors of other */
+ /* problems, so we dodge the issue. */
+ result = (ptr_t) GlobalAlloc(0, bytes + HBLKSIZE);
+ result = (ptr_t)(((word)result + HBLKSIZE) & ~(HBLKSIZE-1));
+ } else {
+ /* VirtualProtect only works on regions returned by a */
+ /* single VirtualAlloc call. Thus we allocate one */
+ /* extra page, which will prevent merging of blocks */
+ /* in separate regions, and eliminate any temptation */
+ /* to call VirtualProtect on a range spanning regions. */
+ /* This wastes a small amount of memory, and risks */
+ /* increased fragmentation. But better alternatives */
+ /* would require effort. */
+ result = (ptr_t) VirtualAlloc(NULL, bytes + 1,
+ MEM_COMMIT | MEM_RESERVE,
+ PAGE_EXECUTE_READWRITE);
+ }
+ if (HBLKDISPL(result) != 0) ABORT("Bad VirtualAlloc result");
+ /* If I read the documentation correctly, this can */
+ /* only happen if HBLKSIZE > 64k or not a power of 2. */
+ if (GC_n_heap_bases >= MAX_HEAP_SECTS) ABORT("Too many heap sections");
+ GC_heap_bases[GC_n_heap_bases++] = result;
+ return(result);
+}
+
+void GC_win32_free_heap ()
+{
+ if (GC_no_win32_dlls) {
+ while (GC_n_heap_bases > 0) {
+ GlobalFree (GC_heap_bases[--GC_n_heap_bases]);
+ GC_heap_bases[GC_n_heap_bases] = 0;
+ }
+ }
+}
+# endif
+
+#ifdef AMIGA
+# define GC_AMIGA_AM
+# include "AmigaOS.c"
+# undef GC_AMIGA_AM
+#endif
+
+
+# ifdef MSWINCE
+word GC_n_heap_bases = 0;
+
+ptr_t GC_wince_get_mem(bytes)
+word bytes;
+{
+ ptr_t result;
+ word i;
+
+ /* Round up allocation size to multiple of page size */
+ bytes = (bytes + GC_page_size-1) & ~(GC_page_size-1);
+
+ /* Try to find reserved, uncommitted pages */
+ for (i = 0; i < GC_n_heap_bases; i++) {
+ if (((word)(-(signed_word)GC_heap_lengths[i])
+ & (GC_sysinfo.dwAllocationGranularity-1))
+ >= bytes) {
+ result = GC_heap_bases[i] + GC_heap_lengths[i];
+ break;
+ }
+ }
+
+ if (i == GC_n_heap_bases) {
+ /* Reserve more pages */
+ word res_bytes = (bytes + GC_sysinfo.dwAllocationGranularity-1)
+ & ~(GC_sysinfo.dwAllocationGranularity-1);
+ /* If we ever support MPROTECT_VDB here, we will probably need to */
+ /* ensure that res_bytes is strictly > bytes, so that VirtualProtect */
+ /* never spans regions. It seems to be OK for a VirtualFree argument */
+ /* to span regions, so we should be OK for now. */
+ result = (ptr_t) VirtualAlloc(NULL, res_bytes,
+ MEM_RESERVE | MEM_TOP_DOWN,
+ PAGE_EXECUTE_READWRITE);
+ if (HBLKDISPL(result) != 0) ABORT("Bad VirtualAlloc result");
+ /* If I read the documentation correctly, this can */
+ /* only happen if HBLKSIZE > 64k or not a power of 2. */
+ if (GC_n_heap_bases >= MAX_HEAP_SECTS) ABORT("Too many heap sections");
+ GC_heap_bases[GC_n_heap_bases] = result;
+ GC_heap_lengths[GC_n_heap_bases] = 0;
+ GC_n_heap_bases++;
+ }
+
+ /* Commit pages */
+ result = (ptr_t) VirtualAlloc(result, bytes,
+ MEM_COMMIT,
+ PAGE_EXECUTE_READWRITE);
+ if (result != NULL) {
+ if (HBLKDISPL(result) != 0) ABORT("Bad VirtualAlloc result");
+ GC_heap_lengths[i] += bytes;
+ }
+
+ return(result);
+}
+# endif
+
+#ifdef USE_MUNMAP
+
+/* For now, this only works on Win32/WinCE and some Unix-like */
+/* systems. If you have something else, don't define */
+/* USE_MUNMAP. */
+/* We assume ANSI C support for this feature. */
+
+#if !defined(MSWIN32) && !defined(MSWINCE)
+
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#endif
+
+/* Compute a page aligned starting address for the unmap */
+/* operation on a block of size bytes starting at start. */
+/* Return 0 if the block is too small to make this feasible. */
+ptr_t GC_unmap_start(ptr_t start, word bytes)
+{
+ ptr_t result = start;
+ /* Round start to next page boundary. */
+ result += GC_page_size - 1;
+ result = (ptr_t)((word)result & ~(GC_page_size - 1));
+ if (result + GC_page_size > start + bytes) return 0;
+ return result;
+}
+
+/* Compute end address for an unmap operation on the indicated */
+/* block. */
+ptr_t GC_unmap_end(ptr_t start, word bytes)
+{
+ ptr_t end_addr = start + bytes;
+ end_addr = (ptr_t)((word)end_addr & ~(GC_page_size - 1));
+ return end_addr;
+}
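+
+/* Example (assuming GC_page_size == 0x1000): for start == 0x10080 */
+/* and bytes == 0x3000, GC_unmap_start rounds up to 0x11000, and */
+/* since 0x11000 + 0x1000 <= 0x13080 that value is returned, while */
+/* GC_unmap_end rounds 0x13080 down to 0x13000. A block too small */
+/* to contain a whole aligned page yields 0 and stays mapped. */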
+
+/* Under Win32/WinCE we commit (map) and decommit (unmap) */
+/* memory using VirtualAlloc and VirtualFree. These functions */
+/* work on individual allocations of virtual memory, made */
+/* previously using VirtualAlloc with the MEM_RESERVE flag. */
+/* The ranges we need to (de)commit may span several of these */
+/* allocations; therefore we use VirtualQuery to check */
+/* allocation lengths, and split up the range as necessary. */
+
+/* We assume that GC_remap is called on exactly the same range */
+/* as a previous call to GC_unmap. It is safe to consistently */
+/* round the endpoints in both places. */
+void GC_unmap(ptr_t start, word bytes)
+{
+ ptr_t start_addr = GC_unmap_start(start, bytes);
+ ptr_t end_addr = GC_unmap_end(start, bytes);
+ word len = end_addr - start_addr;
+ if (0 == start_addr) return;
+# if defined(MSWIN32) || defined(MSWINCE)
+ while (len != 0) {
+ MEMORY_BASIC_INFORMATION mem_info;
+ GC_word free_len;
+ if (VirtualQuery(start_addr, &mem_info, sizeof(mem_info))
+ != sizeof(mem_info))
+ ABORT("Weird VirtualQuery result");
+ free_len = (len < mem_info.RegionSize) ? len : mem_info.RegionSize;
+ if (!VirtualFree(start_addr, free_len, MEM_DECOMMIT))
+ ABORT("VirtualFree failed");
+ GC_unmapped_bytes += free_len;
+ start_addr += free_len;
+ len -= free_len;
+ }
+# else
+ /* We immediately remap it to prevent an intervening mmap from */
+ /* accidentally grabbing the same address space. */
+ {
+ void * result;
+ result = mmap(start_addr, len, PROT_NONE,
+ MAP_PRIVATE | MAP_FIXED | OPT_MAP_ANON,
+ zero_fd, 0/* offset */);
+ if (result != (void *)start_addr) ABORT("mmap(...PROT_NONE...) failed");
+ }
+ GC_unmapped_bytes += len;
+# endif
+}
+
+
+void GC_remap(ptr_t start, word bytes)
+{
+ ptr_t start_addr = GC_unmap_start(start, bytes);
+ ptr_t end_addr = GC_unmap_end(start, bytes);
+ word len = end_addr - start_addr;
+
+# if defined(MSWIN32) || defined(MSWINCE)
+ ptr_t result;
+
+ if (0 == start_addr) return;
+ while (len != 0) {
+ MEMORY_BASIC_INFORMATION mem_info;
+ GC_word alloc_len;
+ if (VirtualQuery(start_addr, &mem_info, sizeof(mem_info))
+ != sizeof(mem_info))
+ ABORT("Weird VirtualQuery result");
+ alloc_len = (len < mem_info.RegionSize) ? len : mem_info.RegionSize;
+ result = VirtualAlloc(start_addr, alloc_len,
+ MEM_COMMIT,
+ PAGE_EXECUTE_READWRITE);
+ if (result != start_addr) {
+ ABORT("VirtualAlloc remapping failed");
+ }
+ GC_unmapped_bytes -= alloc_len;
+ start_addr += alloc_len;
+ len -= alloc_len;
+ }
+# else
+ /* It was already remapped with PROT_NONE. */
+ int result;
+
+ if (0 == start_addr) return;
+ result = mprotect(start_addr, len,
+ PROT_READ | PROT_WRITE | OPT_PROT_EXEC);
+ if (result != 0) {
+ GC_err_printf3(
+ "Mprotect failed at 0x%lx (length %ld) with errno %ld\n",
+ start_addr, len, errno);
+ ABORT("Mprotect remapping failed");
+ }
+ GC_unmapped_bytes -= len;
+# endif
+}
+
+/* Two adjacent blocks have already been unmapped and are about to */
+/* be merged. Unmap the whole block. This typically requires */
+/* that we unmap a small section in the middle that was not previously */
+/* unmapped due to alignment constraints. */
+void GC_unmap_gap(ptr_t start1, word bytes1, ptr_t start2, word bytes2)
+{
+ ptr_t start1_addr = GC_unmap_start(start1, bytes1);
+ ptr_t end1_addr = GC_unmap_end(start1, bytes1);
+ ptr_t start2_addr = GC_unmap_start(start2, bytes2);
+ ptr_t end2_addr = GC_unmap_end(start2, bytes2);
+ ptr_t start_addr = end1_addr;
+ ptr_t end_addr = start2_addr;
+ word len;
+ GC_ASSERT(start1 + bytes1 == start2);
+ if (0 == start1_addr) start_addr = GC_unmap_start(start1, bytes1 + bytes2);
+ if (0 == start2_addr) end_addr = GC_unmap_end(start1, bytes1 + bytes2);
+ if (0 == start_addr) return;
+ len = end_addr - start_addr;
+# if defined(MSWIN32) || defined(MSWINCE)
+ while (len != 0) {
+ MEMORY_BASIC_INFORMATION mem_info;
+ GC_word free_len;
+ if (VirtualQuery(start_addr, &mem_info, sizeof(mem_info))
+ != sizeof(mem_info))
+ ABORT("Weird VirtualQuery result");
+ free_len = (len < mem_info.RegionSize) ? len : mem_info.RegionSize;
+ if (!VirtualFree(start_addr, free_len, MEM_DECOMMIT))
+ ABORT("VirtualFree failed");
+ GC_unmapped_bytes += free_len;
+ start_addr += free_len;
+ len -= free_len;
+ }
+# else
+ if (len != 0 && munmap(start_addr, len) != 0) ABORT("munmap failed");
+ GC_unmapped_bytes += len;
+# endif
+}
+
+#endif /* USE_MUNMAP */
+
+/* Routine for pushing any additional roots. In THREADS */
+/* environment, this is also responsible for marking from */
+/* thread stacks. */
+#ifndef THREADS
+void (*GC_push_other_roots)() = 0;
+#else /* THREADS */
+
+# ifdef PCR
+PCR_ERes GC_push_thread_stack(PCR_Th_T *t, PCR_Any dummy)
+{
+ struct PCR_ThCtl_TInfoRep info;
+ PCR_ERes result;
+
+ info.ti_stkLow = info.ti_stkHi = 0;
+ result = PCR_ThCtl_GetInfo(t, &info);
+ GC_push_all_stack((ptr_t)(info.ti_stkLow), (ptr_t)(info.ti_stkHi));
+ return(result);
+}
+
+/* Push the contents of an old object. We treat this as stack */
+/* data only because that makes it robust against mark stack */
+/* overflow. */
+PCR_ERes GC_push_old_obj(void *p, size_t size, PCR_Any data)
+{
+ GC_push_all_stack((ptr_t)p, (ptr_t)p + size);
+ return(PCR_ERes_okay);
+}
+
+
+void GC_default_push_other_roots GC_PROTO((void))
+{
+ /* Traverse data allocated by previous memory managers. */
+ {
+ extern struct PCR_MM_ProcsRep * GC_old_allocator;
+
+ if ((*(GC_old_allocator->mmp_enumerate))(PCR_Bool_false,
+ GC_push_old_obj, 0)
+ != PCR_ERes_okay) {
+ ABORT("Old object enumeration failed");
+ }
+ }
+ /* Traverse all thread stacks. */
+ if (PCR_ERes_IsErr(
+ PCR_ThCtl_ApplyToAllOtherThreads(GC_push_thread_stack,0))
+ || PCR_ERes_IsErr(GC_push_thread_stack(PCR_Th_CurrThread(), 0))) {
+ ABORT("Thread stack marking failed\n");
+ }
+}
+
+# endif /* PCR */
+
+# ifdef SRC_M3
+
+# ifdef ALL_INTERIOR_POINTERS
+ --> misconfigured
+# endif
+
+void GC_push_thread_structures GC_PROTO((void))
+{
+ /* Not our responsibility. */
+}
+
+extern void ThreadF__ProcessStacks();
+
+void GC_push_thread_stack(start, stop)
+word start, stop;
+{
+ GC_push_all_stack((ptr_t)start, (ptr_t)stop + sizeof(word));
+}
+
+/* Push routine with M3 specific calling convention. */
+GC_m3_push_root(dummy1, p, dummy2, dummy3)
+word *p;
+ptr_t dummy1, dummy2;
+int dummy3;
+{
+ word q = *p;
+
+ GC_PUSH_ONE_STACK(q, p);
+}
+
+/* M3 set equivalent to RTHeap.TracedRefTypes */
+typedef struct { int elts[1]; } RefTypeSet;
+RefTypeSet GC_TracedRefTypes = {{0x1}};
+
+void GC_default_push_other_roots GC_PROTO((void))
+{
+ /* Use the M3 provided routine for finding static roots. */
+ /* This is a bit dubious, since it presumes no C roots. */
+ /* We handle the collector roots explicitly in GC_push_roots */
+ RTMain__GlobalMapProc(GC_m3_push_root, 0, GC_TracedRefTypes);
+ if (GC_words_allocd > 0) {
+ ThreadF__ProcessStacks(GC_push_thread_stack);
+ }
+ /* Otherwise this isn't absolutely necessary, and we have */
+ /* startup ordering problems. */
+}
+
+# endif /* SRC_M3 */
+
+# if defined(GC_SOLARIS_THREADS) || defined(GC_PTHREADS) || \
+ defined(GC_WIN32_THREADS)
+
+extern void GC_push_all_stacks();
+
+void GC_default_push_other_roots GC_PROTO((void))
+{
+ GC_push_all_stacks();
+}
+
+# endif /* GC_SOLARIS_THREADS || GC_PTHREADS || GC_WIN32_THREADS */
+
+void (*GC_push_other_roots) GC_PROTO((void)) = GC_default_push_other_roots;
+
+#endif /* THREADS */
+
+/*
+ * Routines for accessing dirty bits on virtual pages.
+ * We plan to eventually implement four strategies for doing so:
+ * DEFAULT_VDB: A simple dummy implementation that treats every page
+ * as possibly dirty. This makes incremental collection
+ * useless, but the implementation is still correct.
+ * PCR_VDB: Use PPCR's virtual dirty bit facility.
+ * PROC_VDB: Use the /proc facility for reading dirty bits. Only
+ * works under some SVR4 variants. Even then, it may be
+ * too slow to be entirely satisfactory. Requires reading
+ * dirty bits for entire address space. Implementations tend
+ * to assume that the client is a (slow) debugger.
+ * MPROTECT_VDB:Protect pages and then catch the faults to keep track of
+ * dirtied pages. The implementation (and implementability)
+ * is highly system dependent. This usually fails when system
+ * calls write to a protected page. We prevent the read system
+ * call from doing so. It is the client's responsibility to
+ * make sure that other system calls are similarly protected
+ * or write only to the stack.
+ */
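+
+/* Illustrative client-side sketch (not part of this file): with any */
+/* of these strategies compiled in, a client enables incremental */
+/* collection and lets the dirty bit machinery bound rescanning: */
+#if 0
+#include "gc.h"
+
+int main(void)
+{
+    GC_INIT();               /* initialize the collector */
+    GC_enable_incremental(); /* rely on one of the VDB strategies */
+    for (;;) {
+        void *p = GC_MALLOC(1024);
+        if (0 == p) break;
+        /* Writes through p dirty its pages; only dirty pages need */
+        /* to be rescanned during incremental marking. */
+    }
+    return 0;
+}
+#endif
+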
+GC_bool GC_dirty_maintained = FALSE;
+
+# ifdef DEFAULT_VDB
+
+/* All of the following assume the allocation lock is held, and */
+/* signals are disabled. */
+
+/* The client asserts that unallocated pages in the heap are never */
+/* written. */
+
+/* Initialize virtual dirty bit implementation. */
+void GC_dirty_init()
+{
+# ifdef PRINTSTATS
+ GC_printf0("Initializing DEFAULT_VDB...\n");
+# endif
+ GC_dirty_maintained = TRUE;
+}
+
+/* Retrieve system dirty bits for heap to a local buffer. */
+/* Restore the system's notion of which pages are dirty. */
+void GC_read_dirty()
+{}
+
+/* Is the HBLKSIZE sized page at h marked dirty in the local buffer? */
+/* If the actual page size is different, this returns TRUE if any */
+/* of the pages overlapping h are dirty. This routine may err on the */
+/* side of labelling pages as dirty (and this implementation does). */
+/*ARGSUSED*/
+GC_bool GC_page_was_dirty(h)
+struct hblk *h;
+{
+ return(TRUE);
+}
+
+/*
+ * The following two routines are typically less crucial. They matter
+ * most with large dynamic libraries, or if we can't accurately identify
+ * stacks, e.g. under Solaris 2.X. Otherwise the following default
+ * versions are adequate.
+ */
+
+/* Could any valid GC heap pointer ever have been written to this page? */
+/*ARGSUSED*/
+GC_bool GC_page_was_ever_dirty(h)
+struct hblk *h;
+{
+ return(TRUE);
+}
+
+/* Reset the n pages starting at h to "was never dirty" status. */
+void GC_is_fresh(h, n)
+struct hblk *h;
+word n;
+{
+}
+
+/* A call that: */
+/* I) hints that [h, h+nblocks) is about to be written. */
+/* II) guarantees that protection is removed. */
+/* (I) may speed up some dirty bit implementations. */
+/* (II) may be essential if we need to ensure that */
+/* pointer-free system call buffers in the heap are */
+/* not protected. */
+/*ARGSUSED*/
+void GC_remove_protection(h, nblocks, is_ptrfree)
+struct hblk *h;
+word nblocks;
+GC_bool is_ptrfree;
+{
+}
+
+# endif /* DEFAULT_VDB */
+
+
+# ifdef MPROTECT_VDB
+
+/*
+ * See DEFAULT_VDB for interface descriptions.
+ */
+
+/*
+ * This implementation maintains dirty bits itself by catching write
+ * faults and keeping track of them. We assume nobody else catches
+ * SIGBUS or SIGSEGV. We assume no write faults occur in system calls.
+ * This means that clients must ensure that system calls don't write
+ * to the write-protected heap. Probably the best way to do this is to
+ * ensure that system calls write at most to POINTERFREE objects in the
+ * heap, and do even that only if we are on a platform on which those
+ * are not protected. Another alternative is to wrap system calls
+ * (see example for read below), but the current implementation holds
+ * a lock across blocking calls, making it problematic for multithreaded
+ * applications.
+ * We assume the page size is a multiple of HBLKSIZE.
+ * We prefer them to be the same. We avoid protecting POINTERFREE
+ * objects only if they are the same.
+ */
+
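+/* A minimal standalone sketch of the mechanism (illustration only, */
+/* not built here; calling mprotect from a signal handler is not */
+/* strictly async-signal-safe, though it works on common platforms): */
+#if 0
+#include <signal.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+static char *page;
+static long pg_size;
+static volatile sig_atomic_t page_dirty = 0;
+
+static void on_fault(int sig)
+{
+    (void)sig;
+    page_dirty = 1;                                  /* record dirty bit */
+    mprotect(page, pg_size, PROT_READ | PROT_WRITE); /* drop the barrier */
+}
+
+int main(void)
+{
+    pg_size = sysconf(_SC_PAGESIZE);
+    page = mmap(0, pg_size, PROT_READ | PROT_WRITE,
+                MAP_PRIVATE | MAP_ANON, -1, 0);
+    signal(SIGSEGV, on_fault);
+    mprotect(page, pg_size, PROT_READ); /* arm the write barrier */
+    page[0] = 1; /* faults once; the handler marks the page dirty and */
+                 /* unprotects it, then the write retries and succeeds */
+    return page_dirty ? 0 : 1;
+}
+#endif
+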
+# if !defined(MSWIN32) && !defined(MSWINCE) && !defined(DARWIN)
+
+# include <sys/mman.h>
+# include <signal.h>
+# include <sys/syscall.h>
+
+# define PROTECT(addr, len) \
+ if (mprotect((caddr_t)(addr), (size_t)(len), \
+ PROT_READ | OPT_PROT_EXEC) < 0) { \
+ ABORT("mprotect failed"); \
+ }
+# define UNPROTECT(addr, len) \
+ if (mprotect((caddr_t)(addr), (size_t)(len), \
+ PROT_WRITE | PROT_READ | OPT_PROT_EXEC ) < 0) { \
+ ABORT("un-mprotect failed"); \
+ }
+
+# else
+
+# ifdef DARWIN
+ /* Using vm_protect (mach syscall) over mprotect (BSD syscall) seems to
+ decrease the likelihood of some of the problems described below. */
+ #include <mach/vm_map.h>
+ static mach_port_t GC_task_self;
+ #define PROTECT(addr,len) \
+ if(vm_protect(GC_task_self,(vm_address_t)(addr),(vm_size_t)(len), \
+ FALSE,VM_PROT_READ) != KERN_SUCCESS) { \
+ ABORT("vm_portect failed"); \
+ }
+ #define UNPROTECT(addr,len) \
+ if(vm_protect(GC_task_self,(vm_address_t)(addr),(vm_size_t)(len), \
+ FALSE,VM_PROT_READ|VM_PROT_WRITE) != KERN_SUCCESS) { \
+ ABORT("vm_portect failed"); \
+ }
+# else
+
+# ifndef MSWINCE
+# include <signal.h>
+# endif
+
+ static DWORD protect_junk;
+# define PROTECT(addr, len) \
+ if (!VirtualProtect((addr), (len), PAGE_EXECUTE_READ, \
+ &protect_junk)) { \
+ DWORD last_error = GetLastError(); \
+ GC_printf1("Last error code: %lx\n", last_error); \
+ ABORT("VirtualProtect failed"); \
+ }
+# define UNPROTECT(addr, len) \
+ if (!VirtualProtect((addr), (len), PAGE_EXECUTE_READWRITE, \
+ &protect_junk)) { \
+ ABORT("un-VirtualProtect failed"); \
+ }
+# endif /* !DARWIN */
+# endif /* MSWIN32 || MSWINCE || DARWIN */
+
+#if defined(SUNOS4) || (defined(FREEBSD) && !defined(SUNOS5SIGS))
+ typedef void (* SIG_PF)();
+#endif /* SUNOS4 || (FREEBSD && !SUNOS5SIGS) */
+
+#if defined(SUNOS5SIGS) || defined(OSF1) || defined(LINUX) \
+ || defined(HURD)
+# ifdef __STDC__
+ typedef void (* SIG_PF)(int);
+# else
+ typedef void (* SIG_PF)();
+# endif
+#endif /* SUNOS5SIGS || OSF1 || LINUX || HURD */
+
+#if defined(MSWIN32)
+ typedef LPTOP_LEVEL_EXCEPTION_FILTER SIG_PF;
+# undef SIG_DFL
+# define SIG_DFL (LPTOP_LEVEL_EXCEPTION_FILTER) (-1)
+#endif
+#if defined(MSWINCE)
+ typedef LONG (WINAPI *SIG_PF)(struct _EXCEPTION_POINTERS *);
+# undef SIG_DFL
+# define SIG_DFL (SIG_PF) (-1)
+#endif
+
+#if defined(IRIX5) || defined(OSF1) || defined(HURD)
+ typedef void (* REAL_SIG_PF)(int, int, struct sigcontext *);
+#endif /* IRIX5 || OSF1 || HURD */
+
+#if defined(SUNOS5SIGS)
+# if defined(HPUX) || defined(FREEBSD)
+# define SIGINFO_T siginfo_t
+# else
+# define SIGINFO_T struct siginfo
+# endif
+# ifdef __STDC__
+ typedef void (* REAL_SIG_PF)(int, SIGINFO_T *, void *);
+# else
+ typedef void (* REAL_SIG_PF)();
+# endif
+#endif /* SUNOS5SIGS */
+
+#if defined(LINUX)
+# if __GLIBC__ > 2 || __GLIBC__ == 2 && __GLIBC_MINOR__ >= 2
+ typedef struct sigcontext s_c;
+# else /* glibc < 2.2 */
+# include <linux/version.h>
+# if (LINUX_VERSION_CODE >= 0x20100) && !defined(M68K) || defined(ALPHA) || defined(ARM32)
+ typedef struct sigcontext s_c;
+# else
+ typedef struct sigcontext_struct s_c;
+# endif
+# endif /* glibc < 2.2 */
+# if defined(ALPHA) || defined(M68K)
+ typedef void (* REAL_SIG_PF)(int, int, s_c *);
+# else
+# if defined(IA64) || defined(HP_PA) || defined(X86_64)
+ typedef void (* REAL_SIG_PF)(int, siginfo_t *, s_c *);
+ /* FIXME: */
+ /* According to SUSV3, the last argument should have type */
+ /* void * or ucontext_t * */
+# else
+ typedef void (* REAL_SIG_PF)(int, s_c);
+# endif
+# endif
+# ifdef ALPHA
+ /* Retrieve fault address from sigcontext structure by decoding */
+ /* instruction. */
+ char * get_fault_addr(s_c *sc) {
+ unsigned instr;
+ word faultaddr;
+
+ instr = *((unsigned *)(sc->sc_pc));
+ faultaddr = sc->sc_regs[(instr >> 16) & 0x1f];
+ faultaddr += (word) (((int)instr << 16) >> 16);
+ return (char *)faultaddr;
+ }
+# endif /* ALPHA */
+# endif /* LINUX */
+
+#ifndef DARWIN
+SIG_PF GC_old_bus_handler;
+SIG_PF GC_old_segv_handler; /* Also old MSWIN32 ACCESS_VIOLATION filter */
+#endif /* !DARWIN */
+
+#if defined(THREADS)
+/* We need to lock around the bitmap update in the write fault handler */
+/* in order to avoid the risk of losing a bit. We do this with a */
+/* test-and-set spin lock if we know how to do that. Otherwise we */
+/* check whether we are already in the handler and use the dumb but */
+/* safe fallback algorithm of setting all bits in the word. */
+/* Contention should be very rare, so we do the minimum to handle it */
+/* correctly. */
+#ifdef GC_TEST_AND_SET_DEFINED
+ static VOLATILE unsigned int fault_handler_lock = 0;
+ void async_set_pht_entry_from_index(VOLATILE page_hash_table db, int index) {
+ while (GC_test_and_set(&fault_handler_lock)) {}
+ /* Could also revert to set_pht_entry_from_index_safe if initial */
+ /* GC_test_and_set fails. */
+ set_pht_entry_from_index(db, index);
+ GC_clear(&fault_handler_lock);
+ }
+#else /* !GC_TEST_AND_SET_DEFINED */
+ /* THIS IS INCORRECT! The dirty bit vector may be temporarily wrong, */
+ /* just before we notice the conflict and correct it. We may end up */
+ /* looking at it while it's wrong. But this requires contention */
+ /* exactly when a GC is triggered, which seems far less likely to */
+ /* fail than the old code, which had no reported failures. Thus we */
+ /* leave it this way while we think of something better, or support */
+ /* GC_test_and_set on the remaining platforms. */
+ static VOLATILE word currently_updating = 0;
+ void async_set_pht_entry_from_index(VOLATILE page_hash_table db, int index) {
+ unsigned int update_dummy;
+ currently_updating = (word)(&update_dummy);
+ set_pht_entry_from_index(db, index);
+ /* If we get contention in the 10 or so instruction window here, */
+ /* and we get stopped by a GC between the two updates, we lose! */
+ if (currently_updating != (word)(&update_dummy)) {
+ set_pht_entry_from_index_safe(db, index);
+ /* We claim that if two threads concurrently try to update the */
+ /* dirty bit vector, the first one to execute UPDATE_START */
+ /* will see it changed when UPDATE_END is executed. (Note that */
+ /* &update_dummy must differ in two distinct threads.) It */
+ /* will then execute set_pht_entry_from_index_safe, thus */
+ /* returning us to a safe state, though not soon enough. */
+ }
+ }
+#endif /* !GC_TEST_AND_SET_DEFINED */
+#else /* !THREADS */
+# define async_set_pht_entry_from_index(db, index) \
+ set_pht_entry_from_index(db, index)
+#endif /* !THREADS */
+
+/*ARGSUSED*/
+#if !defined(DARWIN)
+# if defined (SUNOS4) || (defined(FREEBSD) && !defined(SUNOS5SIGS))
+ void GC_write_fault_handler(sig, code, scp, addr)
+ int sig, code;
+ struct sigcontext *scp;
+ char * addr;
+# ifdef SUNOS4
+# define SIG_OK (sig == SIGSEGV || sig == SIGBUS)
+# define CODE_OK (FC_CODE(code) == FC_PROT \
+ || (FC_CODE(code) == FC_OBJERR \
+ && FC_ERRNO(code) == FC_PROT))
+# endif
+# ifdef FREEBSD
+# define SIG_OK (sig == SIGBUS)
+# define CODE_OK TRUE
+# endif
+# endif /* SUNOS4 || (FREEBSD && !SUNOS5SIGS) */
+
+# if defined(IRIX5) || defined(OSF1) || defined(HURD)
+# include <errno.h>
+ void GC_write_fault_handler(int sig, int code, struct sigcontext *scp)
+# ifdef OSF1
+# define SIG_OK (sig == SIGSEGV)
+# define CODE_OK (code == 2 /* experimentally determined */)
+# endif
+# ifdef IRIX5
+# define SIG_OK (sig == SIGSEGV)
+# define CODE_OK (code == EACCES)
+# endif
+# ifdef HURD
+# define SIG_OK (sig == SIGBUS || sig == SIGSEGV)
+# define CODE_OK TRUE
+# endif
+# endif /* IRIX5 || OSF1 || HURD */
+
+# if defined(LINUX)
+# if defined(ALPHA) || defined(M68K)
+ void GC_write_fault_handler(int sig, int code, s_c * sc)
+# else
+# if defined(IA64) || defined(HP_PA) || defined(X86_64)
+ void GC_write_fault_handler(int sig, siginfo_t * si, s_c * scp)
+# else
+# if defined(ARM32)
+ void GC_write_fault_handler(int sig, int a2, int a3, int a4, s_c sc)
+# else
+ void GC_write_fault_handler(int sig, s_c sc)
+# endif
+# endif
+# endif
+# define SIG_OK (sig == SIGSEGV)
+# define CODE_OK TRUE
+ /* Empirically c.trapno == 14 on IA32, but is that useful? */
+ /* Should probably consider alignment issues on other */
+ /* architectures. */
+# endif /* LINUX */
+
+# if defined(SUNOS5SIGS)
+# ifdef __STDC__
+ void GC_write_fault_handler(int sig, SIGINFO_T *scp, void * context)
+# else
+ void GC_write_fault_handler(sig, scp, context)
+ int sig;
+ SIGINFO_T *scp;
+ void * context;
+# endif
+# ifdef HPUX
+# define SIG_OK (sig == SIGSEGV || sig == SIGBUS)
+# define CODE_OK (scp -> si_code == SEGV_ACCERR) \
+ || (scp -> si_code == BUS_ADRERR) \
+ || (scp -> si_code == BUS_UNKNOWN) \
+ || (scp -> si_code == SEGV_UNKNOWN) \
+ || (scp -> si_code == BUS_OBJERR)
+# else
+# ifdef FREEBSD
+# define SIG_OK (sig == SIGBUS)
+# define CODE_OK (scp -> si_code == BUS_PAGE_FAULT)
+# else
+# define SIG_OK (sig == SIGSEGV)
+# define CODE_OK (scp -> si_code == SEGV_ACCERR)
+# endif
+# endif
+# endif /* SUNOS5SIGS */
+
+# if defined(MSWIN32) || defined(MSWINCE)
+ LONG WINAPI GC_write_fault_handler(struct _EXCEPTION_POINTERS *exc_info)
+# define SIG_OK (exc_info -> ExceptionRecord -> ExceptionCode == \
+ STATUS_ACCESS_VIOLATION)
+# define CODE_OK (exc_info -> ExceptionRecord -> ExceptionInformation[0] == 1)
+ /* Write fault */
+# endif /* MSWIN32 || MSWINCE */
+{
+ register unsigned i;
+# if defined(HURD)
+ char *addr = (char *) code;
+# endif
+# ifdef IRIX5
+ char * addr = (char *) (size_t) (scp -> sc_badvaddr);
+# endif
+# if defined(OSF1) && defined(ALPHA)
+ char * addr = (char *) (scp -> sc_traparg_a0);
+# endif
+# ifdef SUNOS5SIGS
+ char * addr = (char *) (scp -> si_addr);
+# endif
+# ifdef LINUX
+# if defined(I386)
+ char * addr = (char *) (sc.cr2);
+# else
+# if defined(M68K)
+ char * addr = NULL;
+
+ struct sigcontext *scp = (struct sigcontext *)(sc);
+
+ int format = (scp->sc_formatvec >> 12) & 0xf;
+ unsigned long *framedata = (unsigned long *)(scp + 1);
+ unsigned long ea;
+
+ if (format == 0xa || format == 0xb) {
+ /* 68020/030 */
+ ea = framedata[2];
+ } else if (format == 7) {
+ /* 68040 */
+ ea = framedata[3];
+ if (framedata[1] & 0x08000000) {
+ /* correct addr on misaligned access */
+ ea = (ea+4095)&(~4095);
+ }
+ } else if (format == 4) {
+ /* 68060 */
+ ea = framedata[0];
+ if (framedata[1] & 0x08000000) {
+ /* correct addr on misaligned access */
+ ea = (ea+4095)&(~4095);
+ }
+ }
+ addr = (char *)ea;
+# else
+# ifdef ALPHA
+ char * addr = get_fault_addr(sc);
+# else
+# if defined(IA64) || defined(HP_PA) || defined(X86_64)
+ char * addr = si -> si_addr;
+ /* I believe this is claimed to work on all platforms for */
+ /* Linux 2.3.47 and later. Hopefully we don't have to */
+ /* worry about earlier kernels on IA64. */
+# else
+# if defined(POWERPC)
+ char * addr = (char *) (sc.regs->dar);
+# else
+# if defined(ARM32)
+ char * addr = (char *)sc.fault_address;
+# else
+# if defined(CRIS)
+ char * addr = (char *)sc.regs.csraddr;
+# else
+ --> architecture not supported
+# endif
+# endif
+# endif
+# endif
+# endif
+# endif
+# endif
+# endif
+# if defined(MSWIN32) || defined(MSWINCE)
+ char * addr = (char *) (exc_info -> ExceptionRecord
+ -> ExceptionInformation[1]);
+# define sig SIGSEGV
+# endif
+
+ if (SIG_OK && CODE_OK) {
+ register struct hblk * h =
+ (struct hblk *)((word)addr & ~(GC_page_size-1));
+ GC_bool in_allocd_block;
+
+# ifdef SUNOS5SIGS
+ /* Address is only within the correct physical page. */
+ in_allocd_block = FALSE;
+ for (i = 0; i < divHBLKSZ(GC_page_size); i++) {
+ if (HDR(h+i) != 0) {
+ in_allocd_block = TRUE;
+ }
+ }
+# else
+ in_allocd_block = (HDR(addr) != 0);
+# endif
+ if (!in_allocd_block) {
+ /* FIXME - We should make sure that we invoke the */
+ /* old handler with the appropriate calling */
+ /* sequence, which often depends on SA_SIGINFO. */
+
+ /* Heap blocks now begin and end on page boundaries */
+ SIG_PF old_handler;
+
+ if (sig == SIGSEGV) {
+ old_handler = GC_old_segv_handler;
+ } else {
+ old_handler = GC_old_bus_handler;
+ }
+ if (old_handler == SIG_DFL) {
+# if !defined(MSWIN32) && !defined(MSWINCE)
+ GC_err_printf1("Segfault at 0x%lx\n", addr);
+ ABORT("Unexpected bus error or segmentation fault");
+# else
+ return(EXCEPTION_CONTINUE_SEARCH);
+# endif
+ } else {
+# if defined (SUNOS4) \
+ || (defined(FREEBSD) && !defined(SUNOS5SIGS))
+ (*old_handler) (sig, code, scp, addr);
+ return;
+# endif
+# if defined (SUNOS5SIGS)
+ /*
+ * FIXME: For FreeBSD, this code should check if the
+ * old signal handler used the traditional BSD style and
+ * if so call it using that style.
+ */
+ (*(REAL_SIG_PF)old_handler) (sig, scp, context);
+ return;
+# endif
+# if defined (LINUX)
+# if defined(ALPHA) || defined(M68K)
+ (*(REAL_SIG_PF)old_handler) (sig, code, sc);
+# else
+# if defined(IA64) || defined(HP_PA) || defined(X86_64)
+ (*(REAL_SIG_PF)old_handler) (sig, si, scp);
+# else
+ (*(REAL_SIG_PF)old_handler) (sig, sc);
+# endif
+# endif
+ return;
+# endif
+# if defined (IRIX5) || defined(OSF1) || defined(HURD)
+ (*(REAL_SIG_PF)old_handler) (sig, code, scp);
+ return;
+# endif
+# ifdef MSWIN32
+ return((*old_handler)(exc_info));
+# endif
+ }
+ }
+ UNPROTECT(h, GC_page_size);
+ /* We need to make sure that no collection occurs between */
+ /* the UNPROTECT and the setting of the dirty bit. Otherwise */
+ /* a write by a third thread might go unnoticed. Reversing */
+ /* the order is just as bad, since we would end up unprotecting */
+ /* a page in a GC cycle during which it's not marked. */
+ /* Currently we do this by disabling the thread stopping */
+ /* signals while this handler is running. An alternative might */
+ /* be to record the fact that we're about to unprotect, or */
+ /* have just unprotected a page in the GC's thread structure, */
+ /* and then to have the thread stopping code set the dirty */
+ /* flag, if necessary. */
+ for (i = 0; i < divHBLKSZ(GC_page_size); i++) {
+ register int index = PHT_HASH(h+i);
+
+ async_set_pht_entry_from_index(GC_dirty_pages, index);
+ }
+# if defined(OSF1)
+ /* These reset the signal handler each time by default. */
+ signal(SIGSEGV, (SIG_PF) GC_write_fault_handler);
+# endif
+ /* The write may not take place before dirty bits are read. */
+ /* But then we'll fault again ... */
+# if defined(MSWIN32) || defined(MSWINCE)
+ return(EXCEPTION_CONTINUE_EXECUTION);
+# else
+ return;
+# endif
+ }
+#if defined(MSWIN32) || defined(MSWINCE)
+ return EXCEPTION_CONTINUE_SEARCH;
+#else
+ GC_err_printf1("Segfault at 0x%lx\n", addr);
+ ABORT("Unexpected bus error or segmentation fault");
+#endif
+}
+#endif /* !DARWIN */
+
+/*
+ * We hold the allocation lock. We expect block h to be written
+ * shortly. Ensure that all pages containing any part of the n hblks
+ * starting at h are no longer protected. If is_ptrfree is false,
+ * also ensure that they will subsequently appear to be dirty.
+ */
+void GC_remove_protection(h, nblocks, is_ptrfree)
+struct hblk *h;
+word nblocks;
+GC_bool is_ptrfree;
+{
+ struct hblk * h_trunc; /* Truncated to page boundary */
+ struct hblk * h_end; /* Page boundary following block end */
+ struct hblk * current;
+ GC_bool found_clean;
+
+ if (!GC_dirty_maintained) return;
+ h_trunc = (struct hblk *)((word)h & ~(GC_page_size-1));
+ h_end = (struct hblk *)(((word)(h + nblocks) + GC_page_size-1)
+ & ~(GC_page_size-1));
+ found_clean = FALSE;
+ for (current = h_trunc; current < h_end; ++current) {
+ int index = PHT_HASH(current);
+
+ if (!is_ptrfree || current < h || current >= h + nblocks) {
+ async_set_pht_entry_from_index(GC_dirty_pages, index);
+ }
+ }
+ UNPROTECT(h_trunc, (ptr_t)h_end - (ptr_t)h_trunc);
+}
+
+#if !defined(DARWIN)
+void GC_dirty_init()
+{
+# if defined(SUNOS5SIGS) || defined(IRIX5) || defined(LINUX) || \
+ defined(OSF1) || defined(HURD)
+ struct sigaction act, oldact;
+ /* We should probably specify SA_SIGINFO for Linux, and handle */
+ /* the different architectures more uniformly. */
+# if defined(IRIX5) || defined(LINUX) && !defined(X86_64) \
+ || defined(OSF1) || defined(HURD)
+ act.sa_flags = SA_RESTART;
+ act.sa_handler = (SIG_PF)GC_write_fault_handler;
+# else
+ act.sa_flags = SA_RESTART | SA_SIGINFO;
+ act.sa_sigaction = GC_write_fault_handler;
+# endif
+ (void)sigemptyset(&act.sa_mask);
+# ifdef SIG_SUSPEND
+ /* Arrange to postpone SIG_SUSPEND while we're in a write fault */
+ /* handler. This effectively makes the handler atomic w.r.t. */
+ /* stopping the world for GC. */
+ (void)sigaddset(&act.sa_mask, SIG_SUSPEND);
+# endif /* SIG_SUSPEND */
+# endif
+# ifdef PRINTSTATS
+ GC_printf0("Inititalizing mprotect virtual dirty bit implementation\n");
+# endif
+ GC_dirty_maintained = TRUE;
+ if (GC_page_size % HBLKSIZE != 0) {
+ GC_err_printf0("Page size not multiple of HBLKSIZE\n");
+ ABORT("Page size not multiple of HBLKSIZE");
+ }
+# if defined(SUNOS4) || (defined(FREEBSD) && !defined(SUNOS5SIGS))
+ GC_old_bus_handler = signal(SIGBUS, GC_write_fault_handler);
+ if (GC_old_bus_handler == SIG_IGN) {
+ GC_err_printf0("Previously ignored bus error!?");
+ GC_old_bus_handler = SIG_DFL;
+ }
+ if (GC_old_bus_handler != SIG_DFL) {
+# ifdef PRINTSTATS
+ GC_err_printf0("Replaced other SIGBUS handler\n");
+# endif
+ }
+# endif
+# if defined(SUNOS4)
+ GC_old_segv_handler = signal(SIGSEGV, (SIG_PF)GC_write_fault_handler);
+ if (GC_old_segv_handler == SIG_IGN) {
+ GC_err_printf0("Previously ignored segmentation violation!?");
+ GC_old_segv_handler = SIG_DFL;
+ }
+ if (GC_old_segv_handler != SIG_DFL) {
+# ifdef PRINTSTATS
+ GC_err_printf0("Replaced other SIGSEGV handler\n");
+# endif
+ }
+# endif
+# if (defined(SUNOS5SIGS) && !defined(FREEBSD)) || defined(IRIX5) \
+ || defined(LINUX) || defined(OSF1) || defined(HURD)
+ /* SUNOS5SIGS includes HPUX */
+# if defined(GC_IRIX_THREADS)
+ sigaction(SIGSEGV, 0, &oldact);
+ sigaction(SIGSEGV, &act, 0);
+# else
+ {
+ int res = sigaction(SIGSEGV, &act, &oldact);
+ if (res != 0) ABORT("Sigaction failed");
+ }
+# endif
+# if defined(_sigargs) || defined(HURD) || !defined(SA_SIGINFO)
+ /* This is Irix 5.x, not 6.x. Irix 5.x does not have */
+ /* sa_sigaction. */
+ GC_old_segv_handler = oldact.sa_handler;
+# else /* Irix 6.x or SUNOS5SIGS or LINUX */
+ if (oldact.sa_flags & SA_SIGINFO) {
+ GC_old_segv_handler = (SIG_PF)(oldact.sa_sigaction);
+ } else {
+ GC_old_segv_handler = oldact.sa_handler;
+ }
+# endif
+ if (GC_old_segv_handler == SIG_IGN) {
+ GC_err_printf0("Previously ignored segmentation violation!?");
+ GC_old_segv_handler = SIG_DFL;
+ }
+ if (GC_old_segv_handler != SIG_DFL) {
+# ifdef PRINTSTATS
+ GC_err_printf0("Replaced other SIGSEGV handler\n");
+# endif
+ }
+# endif /* (SUNOS5SIGS && !FREEBSD) || IRIX5 || LINUX || OSF1 || HURD */
+# if defined(HPUX) || defined(LINUX) || defined(HURD) \
+ || (defined(FREEBSD) && defined(SUNOS5SIGS))
+ sigaction(SIGBUS, &act, &oldact);
+ GC_old_bus_handler = oldact.sa_handler;
+ if (GC_old_bus_handler == SIG_IGN) {
+ GC_err_printf0("Previously ignored bus error!?");
+ GC_old_bus_handler = SIG_DFL;
+ }
+ if (GC_old_bus_handler != SIG_DFL) {
+# ifdef PRINTSTATS
+ GC_err_printf0("Replaced other SIGBUS handler\n");
+# endif
+ }
+# endif /* HPUX || LINUX || HURD || (FREEBSD && SUNOS5SIGS) */
+# if defined(MSWIN32)
+ GC_old_segv_handler = SetUnhandledExceptionFilter(GC_write_fault_handler);
+ if (GC_old_segv_handler != NULL) {
+# ifdef PRINTSTATS
+ GC_err_printf0("Replaced other UnhandledExceptionFilter\n");
+# endif
+ } else {
+ GC_old_segv_handler = SIG_DFL;
+ }
+# endif
+}
+#endif /* !DARWIN */
+
+int GC_incremental_protection_needs()
+{
+ if (GC_page_size == HBLKSIZE) {
+ return GC_PROTECTS_POINTER_HEAP;
+ } else {
+ return GC_PROTECTS_POINTER_HEAP | GC_PROTECTS_PTRFREE_HEAP;
+ }
+}
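+
+/* Illustrative client-side use (a sketch, not part of this file): */
+#if 0
+    if (GC_incremental_protection_needs() & GC_PROTECTS_PTRFREE_HEAP) {
+        /* Even pointer-free (atomic) objects may be write-protected; */
+        /* avoid passing them as buffers to system calls that write. */
+    }
+#endif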
+
+#define HAVE_INCREMENTAL_PROTECTION_NEEDS
+
+#define IS_PTRFREE(hhdr) ((hhdr)->hb_descr == 0)
+
+#define PAGE_ALIGNED(x) !((word)(x) & (GC_page_size - 1))
+void GC_protect_heap()
+{
+ ptr_t start;
+ word len;
+ struct hblk * current;
+ struct hblk * current_start; /* Start of block to be protected. */
+ struct hblk * limit;
+ unsigned i;
+ GC_bool protect_all =
+ (0 != (GC_incremental_protection_needs() & GC_PROTECTS_PTRFREE_HEAP));
+ for (i = 0; i < GC_n_heap_sects; i++) {
+ start = GC_heap_sects[i].hs_start;
+ len = GC_heap_sects[i].hs_bytes;
+ if (protect_all) {
+ PROTECT(start, len);
+ } else {
+ GC_ASSERT(PAGE_ALIGNED(len))
+ GC_ASSERT(PAGE_ALIGNED(start))
+ current_start = current = (struct hblk *)start;
+ limit = (struct hblk *)(start + len);
+ while (current < limit) {
+ hdr * hhdr;
+ word nhblks;
+ GC_bool is_ptrfree;
+
+ GC_ASSERT(PAGE_ALIGNED(current));
+ GET_HDR(current, hhdr);
+ if (IS_FORWARDING_ADDR_OR_NIL(hhdr)) {
+ /* This can happen only if we're at the beginning of a */
+ /* heap segment, and a block spans heap segments. */
+ /* We will handle that block as part of the preceding */
+ /* segment. */
+ GC_ASSERT(current_start == current);
+ current_start = ++current;
+ continue;
+ }
+ if (HBLK_IS_FREE(hhdr)) {
+ GC_ASSERT(PAGE_ALIGNED(hhdr -> hb_sz));
+ nhblks = divHBLKSZ(hhdr -> hb_sz);
+ is_ptrfree = TRUE; /* dirty on alloc */
+ } else {
+ nhblks = OBJ_SZ_TO_BLOCKS(hhdr -> hb_sz);
+ is_ptrfree = IS_PTRFREE(hhdr);
+ }
+ if (is_ptrfree) {
+ if (current_start < current) {
+ PROTECT(current_start, (ptr_t)current - (ptr_t)current_start);
+ }
+ current_start = (current += nhblks);
+ } else {
+ current += nhblks;
+ }
+ }
+ if (current_start < current) {
+ PROTECT(current_start, (ptr_t)current - (ptr_t)current_start);
+ }
+ }
+ }
+}
+
+/* We assume that either the world is stopped or it's OK to lose dirty	*/
+/* bits while this is happening (as in GC_enable_incremental).		*/
+void GC_read_dirty()
+{
+ BCOPY((word *)GC_dirty_pages, GC_grungy_pages,
+ (sizeof GC_dirty_pages));
+ BZERO((word *)GC_dirty_pages, (sizeof GC_dirty_pages));
+ GC_protect_heap();
+}
+
+GC_bool GC_page_was_dirty(h)
+struct hblk * h;
+{
+ register word index = PHT_HASH(h);
+
+ return(HDR(h) == 0 || get_pht_entry_from_index(GC_grungy_pages, index));
+}
+
+/*
+ * Acquiring the allocation lock here is dangerous, since this
+ * can be called from within GC_call_with_alloc_lock, and the cord
+ * package does so. On systems that allow nested lock acquisition, this
+ * happens to work.
+ * On other systems, SET_LOCK_HOLDER and friends must be suitably defined.
+ */
+
+static GC_bool syscall_acquired_lock = FALSE; /* Protected by GC lock. */
+
+void GC_begin_syscall()
+{
+ if (!I_HOLD_LOCK()) {
+ LOCK();
+ syscall_acquired_lock = TRUE;
+ }
+}
+
+void GC_end_syscall()
+{
+ if (syscall_acquired_lock) {
+ syscall_acquired_lock = FALSE;
+ UNLOCK();
+ }
+}
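
(A usage sketch of the bracketing pattern; my_read_into_heap is hypothetical, GC_unprotect_range is defined just below, and the ptr_t/word typedefs are assumed to come from the GC-internal gc_priv.h. The commented-out read() wrapper later in this file is the original full example:)

    #include <unistd.h>
    #include "private/gc_priv.h"    /* ptr_t, word (GC-internal) */

    /* Let the kernel write into the protected heap without taking a   */
    /* fault in kernel context: unprotect first, then do the call.     */
    ssize_t my_read_into_heap(int fd, void *buf, size_t nbyte)
    {
        ssize_t result;

        GC_begin_syscall();
        GC_unprotect_range((ptr_t)buf, (word)nbyte);
        result = read(fd, buf, nbyte);
        GC_end_syscall();
        return result;
    }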
+
+void GC_unprotect_range(addr, len)
+ptr_t addr;
+word len;
+{
+ struct hblk * start_block;
+ struct hblk * end_block;
+ register struct hblk *h;
+ ptr_t obj_start;
+
+ if (!GC_dirty_maintained) return;
+ obj_start = GC_base(addr);
+ if (obj_start == 0) return;
+ if (GC_base(addr + len - 1) != obj_start) {
+ ABORT("GC_unprotect_range(range bigger than object)");
+ }
+ start_block = (struct hblk *)((word)addr & ~(GC_page_size - 1));
+ end_block = (struct hblk *)((word)(addr + len - 1) & ~(GC_page_size - 1));
+ end_block += GC_page_size/HBLKSIZE - 1;
+ for (h = start_block; h <= end_block; h++) {
+ register word index = PHT_HASH(h);
+
+ async_set_pht_entry_from_index(GC_dirty_pages, index);
+ }
+ UNPROTECT(start_block,
+ ((ptr_t)end_block - (ptr_t)start_block) + HBLKSIZE);
+}
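
(A worked example of the rounding above, with illustrative numbers and the assumption GC_page_size == HBLKSIZE == 4096, so the end_block adjustment adds zero blocks:)

    #include <stdio.h>

    int main(void)
    {
        unsigned long page = 4096;
        unsigned long addr = 0x10010, len = 0x20;
        unsigned long start = addr & ~(page - 1);             /* 0x10000 */
        unsigned long end   = (addr + len - 1) & ~(page - 1); /* 0x10000 */
        printf("unprotect one page at %#lx\n", start);
        return end == start ? 0 : 1;
    }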
+
+#if 0
+
+/* We no longer wrap read by default, since that was causing too many	*/
+/* problems. Clients should instead avoid writing to the		*/
+/* write-protected heap with a system call.				*/
+/* This still serves as sample code if you do want to wrap system calls. */
+
+#if !defined(MSWIN32) && !defined(MSWINCE) && !defined(GC_USE_LD_WRAP)
+/* Replacement for UNIX system call. */
+/* Other calls that write to the heap should be handled similarly. */
+/* Note that this doesn't work well for blocking reads: It will hold */
+/* the allocation lock for the entire duration of the call. Multithreaded */
+/* clients should really ensure that it won't block, either by setting */
+/* the descriptor nonblocking, or by calling select or poll first, to */
+/* make sure that input is available. */
+/* Another, preferred alternative is to ensure that system calls never */
+/* write to the protected heap (see above). */
+# if defined(__STDC__) && !defined(SUNOS4)
+# include <unistd.h>
+# include <sys/uio.h>
+ ssize_t read(int fd, void *buf, size_t nbyte)
+# else
+# ifndef LINT
+ int read(fd, buf, nbyte)
+# else
+ int GC_read(fd, buf, nbyte)
+# endif
+ int fd;
+ char *buf;
+ int nbyte;
+# endif
+{
+ int result;
+
+ GC_begin_syscall();
+ GC_unprotect_range(buf, (word)nbyte);
+# if defined(IRIX5) || defined(GC_LINUX_THREADS)
+ /* Indirect system call may not always be easily available. */
+ /* We could call _read, but that would interfere with the */
+ /* libpthread interception of read. */
+ /* On Linux, we have to be careful with the linuxthreads */
+ /* read interception. */
+ {
+ struct iovec iov;
+
+ iov.iov_base = buf;
+ iov.iov_len = nbyte;
+ result = readv(fd, &iov, 1);
+ }
+# else
+# if defined(HURD)
+ result = __read(fd, buf, nbyte);
+# else
+ /* The two zero args at the end of this list are because one
+ IA-64 syscall() implementation actually requires six args
+ to be passed, even though they aren't always used. */
+ result = syscall(SYS_read, fd, buf, nbyte, 0, 0);
+# endif /* !HURD */
+# endif
+ GC_end_syscall();
+ return(result);
+}
+#endif /* !MSWIN32 && !MSWINCE && !GC_USE_LD_WRAP */
+
+#if defined(GC_USE_LD_WRAP) && !defined(THREADS)
+ /* We use the GNU ld call wrapping facility. */
+ /* This requires that the linker be invoked with "--wrap read". */
+ /* This can be done by passing -Wl,"--wrap read" to gcc. */
+ /* I'm not sure that this actually wraps whatever version of read */
+ /* is called by stdio. That code also mentions __read. */
+# include <unistd.h>
+ ssize_t __wrap_read(int fd, void *buf, size_t nbyte)
+ {
+ int result;
+
+ GC_begin_syscall();
+ GC_unprotect_range(buf, (word)nbyte);
+ result = __real_read(fd, buf, nbyte);
+ GC_end_syscall();
+ return(result);
+ }
+
+ /* We should probably also do this for __read, or whatever stdio */
+ /* actually calls. */
+#endif
+
+#endif /* 0 */
+
+/*ARGSUSED*/
+GC_bool GC_page_was_ever_dirty(h)
+struct hblk *h;
+{
+ return(TRUE);
+}
+
+/* Reset the n pages starting at h to "was never dirty" status. */
+/*ARGSUSED*/
+void GC_is_fresh(h, n)
+struct hblk *h;
+word n;
+{
+}
+
+# endif /* MPROTECT_VDB */
+
+# ifdef PROC_VDB
+
+/*
+ * See DEFAULT_VDB for interface descriptions.
+ */
+
+/*
+ * This implementation assumes a Solaris 2.X-like /proc pseudo-file-system
+ * from which we can read page modified bits. This facility is far from
+ * optimal (e.g. we would like to get the info for only some of the
+ * address space), but it avoids intercepting system calls.
+ */
+
+#include <errno.h>
+#include <sys/types.h>
+#include <sys/signal.h>
+#include <sys/fault.h>
+#include <sys/syscall.h>
+#include <sys/procfs.h>
+#include <sys/stat.h>
+
+#define INITIAL_BUF_SZ 16384
+word GC_proc_buf_size = INITIAL_BUF_SZ;
+char *GC_proc_buf;
+
+#ifdef GC_SOLARIS_THREADS
+/* We don't have exact sp values for threads. So we count on */
+/* occasionally declaring stack pages to be fresh. Thus we */
+/* need a real implementation of GC_is_fresh. We can't clear */
+/* entries in GC_written_pages, since that would declare all */
+/* pages with the given hash address to be fresh. */
+# define MAX_FRESH_PAGES 8*1024 /* Must be power of 2 */
+ struct hblk ** GC_fresh_pages; /* A direct mapped cache. */
+ /* Collisions are dropped. */
+
+# define FRESH_PAGE_SLOT(h) (divHBLKSZ((word)(h)) & (MAX_FRESH_PAGES-1))
+# define ADD_FRESH_PAGE(h) \
+ GC_fresh_pages[FRESH_PAGE_SLOT(h)] = (h)
+# define PAGE_IS_FRESH(h) \
+ (GC_fresh_pages[FRESH_PAGE_SLOT(h)] == (h) && (h) != 0)
+#endif
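
(The cache above is direct mapped: an insert simply overwrites whatever occupied the slot, so a collision silently evicts the older entry. That is safe, since losing a "fresh" entry only makes a page look possibly dirty, which is conservative. A scaled-down standalone sketch, where SLOTS stands in for MAX_FRESH_PAGES and the >> 12 shift stands in for divHBLKSZ:)

    #include <stdio.h>

    #define SLOTS 8
    static const void *cache[SLOTS];

    static unsigned slot_of(const void *h)
    {
        return (unsigned)(((unsigned long)h >> 12) & (SLOTS - 1));
    }

    int main(void)
    {
        int a, b;
        cache[slot_of(&a)] = &a;      /* like ADD_FRESH_PAGE(&a)       */
        cache[slot_of(&b)] = &b;      /* may evict &a on a collision   */
        printf("a still cached: %d\n", cache[slot_of(&a)] == &a);
        return 0;
    }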
+
+/* Add all pages in pht2 to pht1 */
+void GC_or_pages(pht1, pht2)
+page_hash_table pht1, pht2;
+{
+ register int i;
+
+ for (i = 0; i < PHT_SIZE; i++) pht1[i] |= pht2[i];
+}
+
+int GC_proc_fd;
+
+void GC_dirty_init()
+{
+ int fd;
+ char buf[30];
+
+ GC_dirty_maintained = TRUE;
+ if (GC_words_allocd != 0 || GC_words_allocd_before_gc != 0) {
+ register int i;
+
+ for (i = 0; i < PHT_SIZE; i++) GC_written_pages[i] = (word)(-1);
+# ifdef PRINTSTATS
+ GC_printf1("Allocated words:%lu:all pages may have been written\n",
+ (unsigned long)
+ (GC_words_allocd + GC_words_allocd_before_gc));
+# endif
+ }
+ sprintf(buf, "/proc/%d", getpid());
+ fd = open(buf, O_RDONLY);
+ if (fd < 0) {
+ ABORT("/proc open failed");
+ }
+    GC_proc_fd = syscall(SYS_ioctl, fd, PIOCOPENPD, 0);
+    close(fd);
+    if (GC_proc_fd < 0) {
+    	ABORT("/proc ioctl failed");
+    }
+    syscall(SYS_fcntl, GC_proc_fd, F_SETFD, FD_CLOEXEC);
+ GC_proc_buf = GC_scratch_alloc(GC_proc_buf_size);
+# ifdef GC_SOLARIS_THREADS
+ GC_fresh_pages = (struct hblk **)
+ GC_scratch_alloc(MAX_FRESH_PAGES * sizeof (struct hblk *));
+ if (GC_fresh_pages == 0) {
+ GC_err_printf0("No space for fresh pages\n");
+ EXIT();
+ }
+ BZERO(GC_fresh_pages, MAX_FRESH_PAGES * sizeof (struct hblk *));
+# endif
+}
+
+/* Ignore write hints. They don't help us here. */
+/*ARGSUSED*/
+void GC_remove_protection(h, nblocks, is_ptrfree)
+struct hblk *h;
+word nblocks;
+GC_bool is_ptrfree;
+{
+}
+
+#ifdef GC_SOLARIS_THREADS
+# define READ(fd,buf,nbytes) syscall(SYS_read, fd, buf, nbytes)
+#else
+# define READ(fd,buf,nbytes) read(fd, buf, nbytes)
+#endif
+
+void GC_read_dirty()
+{
+ unsigned long ps, np;
+ int nmaps;
+ ptr_t vaddr;
+ struct prasmap * map;
+ char * bufp;
+ ptr_t current_addr, limit;
+ int i;
+
+ BZERO(GC_grungy_pages, (sizeof GC_grungy_pages));
+
+ bufp = GC_proc_buf;
+ if (READ(GC_proc_fd, bufp, GC_proc_buf_size) <= 0) {
+# ifdef PRINTSTATS
+ GC_printf1("/proc read failed: GC_proc_buf_size = %lu\n",
+ GC_proc_buf_size);
+# endif
+ {
+ /* Retry with larger buffer. */
+ word new_size = 2 * GC_proc_buf_size;
+ char * new_buf = GC_scratch_alloc(new_size);
+
+ if (new_buf != 0) {
+ GC_proc_buf = bufp = new_buf;
+ GC_proc_buf_size = new_size;
+ }
+ if (READ(GC_proc_fd, bufp, GC_proc_buf_size) <= 0) {
+ WARN("Insufficient space for /proc read\n", 0);
+ /* Punt: */
+ memset(GC_grungy_pages, 0xff, sizeof (page_hash_table));
+ memset(GC_written_pages, 0xff, sizeof(page_hash_table));
+# ifdef GC_SOLARIS_THREADS
+ BZERO(GC_fresh_pages,
+ MAX_FRESH_PAGES * sizeof (struct hblk *));
+# endif
+ return;
+ }
+ }
+ }
+ /* Copy dirty bits into GC_grungy_pages */
+ nmaps = ((struct prpageheader *)bufp) -> pr_nmap;
+ /* printf( "nmaps = %d, PG_REFERENCED = %d, PG_MODIFIED = %d\n",
+ nmaps, PG_REFERENCED, PG_MODIFIED); */
+ bufp = bufp + sizeof(struct prpageheader);
+ for (i = 0; i < nmaps; i++) {
+ map = (struct prasmap *)bufp;
+ vaddr = (ptr_t)(map -> pr_vaddr);
+ ps = map -> pr_pagesize;
+ np = map -> pr_npage;
+ /* printf("vaddr = 0x%X, ps = 0x%X, np = 0x%X\n", vaddr, ps, np); */
+ limit = vaddr + ps * np;
+ bufp += sizeof (struct prasmap);
+ for (current_addr = vaddr;
+ current_addr < limit; current_addr += ps){
+ if ((*bufp++) & PG_MODIFIED) {
+ register struct hblk * h = (struct hblk *) current_addr;
+
+ while ((ptr_t)h < current_addr + ps) {
+ register word index = PHT_HASH(h);
+
+ set_pht_entry_from_index(GC_grungy_pages, index);
+# ifdef GC_SOLARIS_THREADS
+ {
+ register int slot = FRESH_PAGE_SLOT(h);
+
+ if (GC_fresh_pages[slot] == h) {
+ GC_fresh_pages[slot] = 0;
+ }
+ }
+# endif
+ h++;
+ }
+ }
+ }
+	/* Round bufp up to the next sizeof(long) boundary.	*/
+	bufp += sizeof(long) - 1;
+	bufp = (char *)((unsigned long)bufp & ~(sizeof(long)-1));
+ }
+ /* Update GC_written_pages. */
+ GC_or_pages(GC_written_pages, GC_grungy_pages);
+# ifdef GC_SOLARIS_THREADS
+ /* Make sure that old stacks are considered completely clean */
+ /* unless written again. */
+ GC_old_stacks_are_fresh();
+# endif
+}
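
(The two bufp lines at the end of the map loop are the usual round-up-to-a-multiple idiom: overshoot by size - 1, then truncate down with a mask. A standalone illustration:)

    #include <stdio.h>

    int main(void)
    {
        unsigned long p = 0x1003;          /* unaligned cursor          */
        p += sizeof(long) - 1;             /* overshoot                 */
        p &= ~(sizeof(long) - 1);          /* truncate to a multiple    */
        printf("%#lx\n", p);               /* 0x1008 where long is 8    */
        return 0;
    }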
+
+#undef READ
+
+GC_bool GC_page_was_dirty(h)
+struct hblk *h;
+{
+ register word index = PHT_HASH(h);
+ register GC_bool result;
+
+ result = get_pht_entry_from_index(GC_grungy_pages, index);
+# ifdef GC_SOLARIS_THREADS
+ if (result && PAGE_IS_FRESH(h)) result = FALSE;
+ /* This happens only if page was declared fresh since */
+ /* the read_dirty call, e.g. because it's in an unused */
+ /* thread stack. It's OK to treat it as clean, in */
+ /* that case. And it's consistent with */
+ /* GC_page_was_ever_dirty. */
+# endif
+ return(result);
+}
+
+GC_bool GC_page_was_ever_dirty(h)
+struct hblk *h;
+{
+ register word index = PHT_HASH(h);
+ register GC_bool result;
+
+ result = get_pht_entry_from_index(GC_written_pages, index);
+# ifdef GC_SOLARIS_THREADS
+ if (result && PAGE_IS_FRESH(h)) result = FALSE;
+# endif
+ return(result);
+}
+
+/* Caller holds allocation lock. */
+void GC_is_fresh(h, n)
+struct hblk *h;
+word n;
+{
+
+# ifdef GC_SOLARIS_THREADS
+ register word i;
+
+ if (GC_fresh_pages != 0) {
+ for (i = 0; i < n; i++) {
+ ADD_FRESH_PAGE(h + i);
+ }
+ }
+# endif
+}
+
+# endif /* PROC_VDB */
+
+
+# ifdef PCR_VDB
+
+# include "vd/PCR_VD.h"
+
+# define NPAGES (32*1024) /* 128 MB */
+
+PCR_VD_DB GC_grungy_bits[NPAGES];
+
+ptr_t GC_vd_base; /* Address corresponding to GC_grungy_bits[0] */
+ /* HBLKSIZE aligned. */
+
+void GC_dirty_init()
+{
+ GC_dirty_maintained = TRUE;
+ /* For the time being, we assume the heap generally grows up */
+ GC_vd_base = GC_heap_sects[0].hs_start;
+ if (GC_vd_base == 0) {
+ ABORT("Bad initial heap segment");
+ }
+ if (PCR_VD_Start(HBLKSIZE, GC_vd_base, NPAGES*HBLKSIZE)
+ != PCR_ERes_okay) {
+ ABORT("dirty bit initialization failed");
+ }
+}
+
+void GC_read_dirty()
+{
+ /* lazily enable dirty bits on newly added heap sects */
+ {
+ static int onhs = 0;
+ int nhs = GC_n_heap_sects;
+ for( ; onhs < nhs; onhs++ ) {
+ PCR_VD_WriteProtectEnable(
+ GC_heap_sects[onhs].hs_start,
+ GC_heap_sects[onhs].hs_bytes );
+ }
+ }
+
+
+ if (PCR_VD_Clear(GC_vd_base, NPAGES*HBLKSIZE, GC_grungy_bits)
+ != PCR_ERes_okay) {
+ ABORT("dirty bit read failed");
+ }
+}
+
+GC_bool GC_page_was_dirty(h)
+struct hblk *h;
+{
+ if((ptr_t)h < GC_vd_base || (ptr_t)h >= GC_vd_base + NPAGES*HBLKSIZE) {
+ return(TRUE);
+ }
+ return(GC_grungy_bits[h - (struct hblk *)GC_vd_base] & PCR_VD_DB_dirtyBit);
+}
+
+/*ARGSUSED*/
+void GC_remove_protection(h, nblocks, is_ptrfree)
+struct hblk *h;
+word nblocks;
+GC_bool is_ptrfree;
+{
+ PCR_VD_WriteProtectDisable(h, nblocks*HBLKSIZE);
+ PCR_VD_WriteProtectEnable(h, nblocks*HBLKSIZE);
+}
+
+# endif /* PCR_VDB */
+
+#if defined(MPROTECT_VDB) && defined(DARWIN)
+/* The following sources were used as a *reference* for this exception handling
+ code:
+ 1. Apple's mach/xnu documentation
+ 2. Timothy J. Wood's "Mach Exception Handlers 101" post to the
+ omnigroup's macosx-dev list.
+ www.omnigroup.com/mailman/archive/macosx-dev/2000-June/014178.html
+ 3. macosx-nat.c from Apple's GDB source code.
+*/
+
+/* The bug that caused all this trouble should now be fixed. This should
+ eventually be removed if all goes well. */
+/* define BROKEN_EXCEPTION_HANDLING */
+
+#include <mach/mach.h>
+#include <mach/mach_error.h>
+#include <mach/thread_status.h>
+#include <mach/exception.h>
+#include <mach/task.h>
+#include <pthread.h>
+
+/* These are not defined in any header, although they are documented */
+extern boolean_t exc_server(mach_msg_header_t *,mach_msg_header_t *);
+extern kern_return_t exception_raise(
+ mach_port_t,mach_port_t,mach_port_t,
+ exception_type_t,exception_data_t,mach_msg_type_number_t);
+extern kern_return_t exception_raise_state(
+ mach_port_t,mach_port_t,mach_port_t,
+ exception_type_t,exception_data_t,mach_msg_type_number_t,
+ thread_state_flavor_t*,thread_state_t,mach_msg_type_number_t,
+ thread_state_t,mach_msg_type_number_t*);
+extern kern_return_t exception_raise_state_identity(
+ mach_port_t,mach_port_t,mach_port_t,
+ exception_type_t,exception_data_t,mach_msg_type_number_t,
+ thread_state_flavor_t*,thread_state_t,mach_msg_type_number_t,
+ thread_state_t,mach_msg_type_number_t*);
+
+
+#define MAX_EXCEPTION_PORTS 16
+
+static struct {
+ mach_msg_type_number_t count;
+ exception_mask_t masks[MAX_EXCEPTION_PORTS];
+ exception_handler_t ports[MAX_EXCEPTION_PORTS];
+ exception_behavior_t behaviors[MAX_EXCEPTION_PORTS];
+ thread_state_flavor_t flavors[MAX_EXCEPTION_PORTS];
+} GC_old_exc_ports;
+
+static struct {
+ mach_port_t exception;
+#if defined(THREADS)
+ mach_port_t reply;
+#endif
+} GC_ports;
+
+typedef struct {
+ mach_msg_header_t head;
+} GC_msg_t;
+
+typedef enum {
+ GC_MP_NORMAL, GC_MP_DISCARDING, GC_MP_STOPPED
+} GC_mprotect_state_t;
+
+/* FIXME: 1 and 2 seem to be safe to use in the msgh_id field,
+   but this isn't documented. Check the xnu source to confirm
+   that they are safe. */
+#define ID_STOP 1
+#define ID_RESUME 2
+
+/* These values are only used on the reply port */
+#define ID_ACK 3
+
+#if defined(THREADS)
+
+GC_mprotect_state_t GC_mprotect_state;
+
+/* The following should ONLY be called when the world is stopped */
+static void GC_mprotect_thread_notify(mach_msg_id_t id) {
+ struct {
+ GC_msg_t msg;
+ mach_msg_trailer_t trailer;
+ } buf;
+ mach_msg_return_t r;
+ /* remote, local */
+ buf.msg.head.msgh_bits =
+ MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND,0);
+ buf.msg.head.msgh_size = sizeof(buf.msg);
+ buf.msg.head.msgh_remote_port = GC_ports.exception;
+ buf.msg.head.msgh_local_port = MACH_PORT_NULL;
+ buf.msg.head.msgh_id = id;
+
+ r = mach_msg(
+ &buf.msg.head,
+ MACH_SEND_MSG|MACH_RCV_MSG|MACH_RCV_LARGE,
+ sizeof(buf.msg),
+ sizeof(buf),
+ GC_ports.reply,
+ MACH_MSG_TIMEOUT_NONE,
+ MACH_PORT_NULL);
+ if(r != MACH_MSG_SUCCESS)
+ ABORT("mach_msg failed in GC_mprotect_thread_notify");
+ if(buf.msg.head.msgh_id != ID_ACK)
+ ABORT("invalid ack in GC_mprotect_thread_notify");
+}
+
+/* Should only be called by the mprotect thread */
+static void GC_mprotect_thread_reply() {
+ GC_msg_t msg;
+ mach_msg_return_t r;
+ /* remote, local */
+ msg.head.msgh_bits =
+ MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND,0);
+ msg.head.msgh_size = sizeof(msg);
+ msg.head.msgh_remote_port = GC_ports.reply;
+ msg.head.msgh_local_port = MACH_PORT_NULL;
+ msg.head.msgh_id = ID_ACK;
+
+ r = mach_msg(
+ &msg.head,
+ MACH_SEND_MSG,
+ sizeof(msg),
+ 0,
+ MACH_PORT_NULL,
+ MACH_MSG_TIMEOUT_NONE,
+ MACH_PORT_NULL);
+ if(r != MACH_MSG_SUCCESS)
+ ABORT("mach_msg failed in GC_mprotect_thread_reply");
+}
+
+void GC_mprotect_stop() {
+ GC_mprotect_thread_notify(ID_STOP);
+}
+void GC_mprotect_resume() {
+ GC_mprotect_thread_notify(ID_RESUME);
+}
+
+#else /* !THREADS */
+/* The compiler should optimize away any GC_mprotect_state computations */
+#define GC_mprotect_state GC_MP_NORMAL
+#endif
+
+static void *GC_mprotect_thread(void *arg) {
+ mach_msg_return_t r;
+ /* These two structures contain some private kernel data. We don't need to
+ access any of it so we don't bother defining a proper struct. The
+ correct definitions are in the xnu source code. */
+ struct {
+ mach_msg_header_t head;
+ char data[256];
+ } reply;
+ struct {
+ mach_msg_header_t head;
+ mach_msg_body_t msgh_body;
+ char data[1024];
+ } msg;
+
+ mach_msg_id_t id;
+
+ GC_darwin_register_mach_handler_thread(mach_thread_self());
+
+ for(;;) {
+ r = mach_msg(
+ &msg.head,
+ MACH_RCV_MSG|MACH_RCV_LARGE|
+ (GC_mprotect_state == GC_MP_DISCARDING ? MACH_RCV_TIMEOUT : 0),
+ 0,
+ sizeof(msg),
+ GC_ports.exception,
+ GC_mprotect_state == GC_MP_DISCARDING ? 0 : MACH_MSG_TIMEOUT_NONE,
+ MACH_PORT_NULL);
+
+ id = r == MACH_MSG_SUCCESS ? msg.head.msgh_id : -1;
+
+#if defined(THREADS)
+ if(GC_mprotect_state == GC_MP_DISCARDING) {
+ if(r == MACH_RCV_TIMED_OUT) {
+ GC_mprotect_state = GC_MP_STOPPED;
+ GC_mprotect_thread_reply();
+ continue;
+ }
+ if(r == MACH_MSG_SUCCESS && (id == ID_STOP || id == ID_RESUME))
+ ABORT("out of order mprotect thread request");
+ }
+#endif
+
+ if(r != MACH_MSG_SUCCESS) {
+ GC_err_printf2("mach_msg failed with %d %s\n",
+ (int)r,mach_error_string(r));
+ ABORT("mach_msg failed");
+ }
+
+ switch(id) {
+#if defined(THREADS)
+ case ID_STOP:
+ if(GC_mprotect_state != GC_MP_NORMAL)
+ ABORT("Called mprotect_stop when state wasn't normal");
+ GC_mprotect_state = GC_MP_DISCARDING;
+ break;
+ case ID_RESUME:
+ if(GC_mprotect_state != GC_MP_STOPPED)
+ ABORT("Called mprotect_resume when state wasn't stopped");
+ GC_mprotect_state = GC_MP_NORMAL;
+ GC_mprotect_thread_reply();
+ break;
+#endif /* THREADS */
+ default:
+ /* Handle the message (calls catch_exception_raise) */
+ if(!exc_server(&msg.head,&reply.head))
+ ABORT("exc_server failed");
+ /* Send the reply */
+ r = mach_msg(
+ &reply.head,
+ MACH_SEND_MSG,
+ reply.head.msgh_size,
+ 0,
+ MACH_PORT_NULL,
+ MACH_MSG_TIMEOUT_NONE,
+ MACH_PORT_NULL);
+ if(r != MACH_MSG_SUCCESS) {
+ /* This will fail if the thread dies, but the thread shouldn't
+ die... */
+ #ifdef BROKEN_EXCEPTION_HANDLING
+ GC_err_printf2(
+ "mach_msg failed with %d %s while sending exc reply\n",
+ (int)r,mach_error_string(r));
+ #else
+ ABORT("mach_msg failed while sending exception reply");
+ #endif
+ }
+ } /* switch */
+ } /* for(;;) */
+ /* NOT REACHED */
+ return NULL;
+}
+
+/* All this SIGBUS code shouldn't be necessary. All protection faults should
+   be going through the mach exception handler. However, it seems a SIGBUS is
+ occasionally sent for some unknown reason. Even more odd, it seems to be
+ meaningless and safe to ignore. */
+#ifdef BROKEN_EXCEPTION_HANDLING
+
+typedef void (* SIG_PF)();
+static SIG_PF GC_old_bus_handler;
+
+/* Updates to this aren't atomic, but the SIGBUSs seem pretty rare.
+   Even if this doesn't get updated properly, it isn't really a problem. */
+static int GC_sigbus_count;
+
+static void GC_darwin_sigbus(int num,siginfo_t *sip,void *context) {
+ if(num != SIGBUS) ABORT("Got a non-sigbus signal in the sigbus handler");
+
+ /* Ugh... some seem safe to ignore, but too many in a row probably means
+ trouble. GC_sigbus_count is reset for each mach exception that is
+ handled */
+ if(GC_sigbus_count >= 8) {
+ ABORT("Got more than 8 SIGBUSs in a row!");
+ } else {
+ GC_sigbus_count++;
+ GC_err_printf0("GC: WARNING: Ignoring SIGBUS.\n");
+ }
+}
+#endif /* BROKEN_EXCEPTION_HANDLING */
+
+void GC_dirty_init() {
+ kern_return_t r;
+ mach_port_t me;
+ pthread_t thread;
+ pthread_attr_t attr;
+ exception_mask_t mask;
+
+# ifdef PRINTSTATS
+ GC_printf0("Inititalizing mach/darwin mprotect virtual dirty bit "
+ "implementation\n");
+# endif
+# ifdef BROKEN_EXCEPTION_HANDLING
+ GC_err_printf0("GC: WARNING: Enabling workarounds for various darwin "
+ "exception handling bugs.\n");
+# endif
+ GC_dirty_maintained = TRUE;
+ if (GC_page_size % HBLKSIZE != 0) {
+ GC_err_printf0("Page size not multiple of HBLKSIZE\n");
+ ABORT("Page size not multiple of HBLKSIZE");
+ }
+
+ GC_task_self = me = mach_task_self();
+
+ r = mach_port_allocate(me,MACH_PORT_RIGHT_RECEIVE,&GC_ports.exception);
+ if(r != KERN_SUCCESS) ABORT("mach_port_allocate failed (exception port)");
+
+ r = mach_port_insert_right(me,GC_ports.exception,GC_ports.exception,
+ MACH_MSG_TYPE_MAKE_SEND);
+ if(r != KERN_SUCCESS)
+ ABORT("mach_port_insert_right failed (exception port)");
+
+ #if defined(THREADS)
+ r = mach_port_allocate(me,MACH_PORT_RIGHT_RECEIVE,&GC_ports.reply);
+ if(r != KERN_SUCCESS) ABORT("mach_port_allocate failed (reply port)");
+ #endif
+
+ /* The exceptions we want to catch */
+ mask = EXC_MASK_BAD_ACCESS;
+
+ r = task_get_exception_ports(
+ me,
+ mask,
+ GC_old_exc_ports.masks,
+ &GC_old_exc_ports.count,
+ GC_old_exc_ports.ports,
+ GC_old_exc_ports.behaviors,
+ GC_old_exc_ports.flavors
+ );
+ if(r != KERN_SUCCESS) ABORT("task_get_exception_ports failed");
+
+ r = task_set_exception_ports(
+ me,
+ mask,
+ GC_ports.exception,
+ EXCEPTION_DEFAULT,
+ GC_MACH_THREAD_STATE
+ );
+ if(r != KERN_SUCCESS) ABORT("task_set_exception_ports failed");
+
+ if(pthread_attr_init(&attr) != 0) ABORT("pthread_attr_init failed");
+ if(pthread_attr_setdetachstate(&attr,PTHREAD_CREATE_DETACHED) != 0)
+ ABORT("pthread_attr_setdetachedstate failed");
+
+# undef pthread_create
+ /* This will call the real pthread function, not our wrapper */
+ if(pthread_create(&thread,&attr,GC_mprotect_thread,NULL) != 0)
+ ABORT("pthread_create failed");
+ pthread_attr_destroy(&attr);
+
+ /* Setup the sigbus handler for ignoring the meaningless SIGBUSs */
+ #ifdef BROKEN_EXCEPTION_HANDLING
+ {
+ struct sigaction sa, oldsa;
+ sa.sa_handler = (SIG_PF)GC_darwin_sigbus;
+ sigemptyset(&sa.sa_mask);
+ sa.sa_flags = SA_RESTART|SA_SIGINFO;
+ if(sigaction(SIGBUS,&sa,&oldsa) < 0) ABORT("sigaction");
+ GC_old_bus_handler = (SIG_PF)oldsa.sa_handler;
+ if (GC_old_bus_handler != SIG_DFL) {
+# ifdef PRINTSTATS
+ GC_err_printf0("Replaced other SIGBUS handler\n");
+# endif
+ }
+ }
+ #endif /* BROKEN_EXCEPTION_HANDLING */
+}
+
+/* The source code for Apple's GDB was used as a reference for the exception
+   forwarding code. This code is similar to the GDB code only because there is
+ only one way to do it. */
+static kern_return_t GC_forward_exception(
+ mach_port_t thread,
+ mach_port_t task,
+ exception_type_t exception,
+ exception_data_t data,
+ mach_msg_type_number_t data_count
+) {
+ int i;
+ kern_return_t r;
+ mach_port_t port;
+ exception_behavior_t behavior;
+ thread_state_flavor_t flavor;
+
+    thread_state_data_t thread_state;
+ mach_msg_type_number_t thread_state_count = THREAD_STATE_MAX;
+
+ for(i=0;i<GC_old_exc_ports.count;i++)
+ if(GC_old_exc_ports.masks[i] & (1 << exception))
+ break;
+ if(i==GC_old_exc_ports.count) ABORT("No handler for exception!");
+
+ port = GC_old_exc_ports.ports[i];
+ behavior = GC_old_exc_ports.behaviors[i];
+ flavor = GC_old_exc_ports.flavors[i];
+
+ if(behavior != EXCEPTION_DEFAULT) {
+ r = thread_get_state(thread,flavor,thread_state,&thread_state_count);
+ if(r != KERN_SUCCESS)
+ ABORT("thread_get_state failed in forward_exception");
+ }
+
+ switch(behavior) {
+ case EXCEPTION_DEFAULT:
+ r = exception_raise(port,thread,task,exception,data,data_count);
+ break;
+ case EXCEPTION_STATE:
+ r = exception_raise_state(port,thread,task,exception,data,
+ data_count,&flavor,thread_state,thread_state_count,
+ thread_state,&thread_state_count);
+ break;
+ case EXCEPTION_STATE_IDENTITY:
+ r = exception_raise_state_identity(port,thread,task,exception,data,
+ data_count,&flavor,thread_state,thread_state_count,
+ thread_state,&thread_state_count);
+ break;
+ default:
+ r = KERN_FAILURE; /* make gcc happy */
+ ABORT("forward_exception: unknown behavior");
+ break;
+ }
+
+ if(behavior != EXCEPTION_DEFAULT) {
+ r = thread_set_state(thread,flavor,thread_state,thread_state_count);
+ if(r != KERN_SUCCESS)
+ ABORT("thread_set_state failed in forward_exception");
+ }
+
+ return r;
+}
+
+#define FWD() GC_forward_exception(thread,task,exception,code,code_count)
+
+/* This violates the namespace rules but there isn't anything that can be done
+   about it. The exception handling stuff is hard-coded to call this. */
+kern_return_t
+catch_exception_raise(
+ mach_port_t exception_port,mach_port_t thread,mach_port_t task,
+ exception_type_t exception,exception_data_t code,
+ mach_msg_type_number_t code_count
+) {
+ kern_return_t r;
+ char *addr;
+ struct hblk *h;
+ int i;
+# if defined(POWERPC)
+# if CPP_WORDSZ == 32
+ thread_state_flavor_t flavor = PPC_EXCEPTION_STATE;
+ mach_msg_type_number_t exc_state_count = PPC_EXCEPTION_STATE_COUNT;
+ ppc_exception_state_t exc_state;
+# else
+ thread_state_flavor_t flavor = PPC_EXCEPTION_STATE64;
+ mach_msg_type_number_t exc_state_count = PPC_EXCEPTION_STATE64_COUNT;
+ ppc_exception_state64_t exc_state;
+# endif
+# elif defined(I386) || defined(X86_64)
+# if CPP_WORDSZ == 32
+ thread_state_flavor_t flavor = x86_EXCEPTION_STATE32;
+ mach_msg_type_number_t exc_state_count = x86_EXCEPTION_STATE32_COUNT;
+ x86_exception_state32_t exc_state;
+# else
+ thread_state_flavor_t flavor = x86_EXCEPTION_STATE64;
+ mach_msg_type_number_t exc_state_count = x86_EXCEPTION_STATE64_COUNT;
+ x86_exception_state64_t exc_state;
+# endif
+# else
+# error FIXME for non-ppc darwin
+# endif
+
+
+ if(exception != EXC_BAD_ACCESS || code[0] != KERN_PROTECTION_FAILURE) {
+ #ifdef DEBUG_EXCEPTION_HANDLING
+ /* We aren't interested, pass it on to the old handler */
+ GC_printf3("Exception: 0x%x Code: 0x%x 0x%x in catch....\n",
+ exception,
+ code_count > 0 ? code[0] : -1,
+ code_count > 1 ? code[1] : -1);
+ #endif
+ return FWD();
+ }
+
+ r = thread_get_state(thread,flavor,
+ (natural_t*)&exc_state,&exc_state_count);
+ if(r != KERN_SUCCESS) {
+ /* The thread is supposed to be suspended while the exception handler
+ is called. This shouldn't fail. */
+ #ifdef BROKEN_EXCEPTION_HANDLING
+ GC_err_printf0("thread_get_state failed in "
+ "catch_exception_raise\n");
+ return KERN_SUCCESS;
+ #else
+ ABORT("thread_get_state failed in catch_exception_raise");
+ #endif
+ }
+
+ /* This is the address that caused the fault */
+#if defined(POWERPC)
+ addr = (char*) exc_state. THREAD_FLD(dar);
+#elif defined (I386) || defined (X86_64)
+ addr = (char*) exc_state. THREAD_FLD(faultvaddr);
+#else
+# error FIXME for non POWERPC/I386
+#endif
+
+ if((HDR(addr)) == 0) {
+ /* Ugh... just like the SIGBUS problem above, it seems we get a bogus
+       KERN_PROTECTION_FAILURE every once in a while. We wait till we get
+       a bunch in a row before doing anything about it. If a "real" fault
+       ever occurs, it'll just keep faulting over and over and we'll hit
+ the limit pretty quickly. */
+ #ifdef BROKEN_EXCEPTION_HANDLING
+ static char *last_fault;
+ static int last_fault_count;
+
+ if(addr != last_fault) {
+ last_fault = addr;
+ last_fault_count = 0;
+ }
+ if(++last_fault_count < 32) {
+ if(last_fault_count == 1)
+ GC_err_printf1(
+ "GC: WARNING: Ignoring KERN_PROTECTION_FAILURE at %p\n",
+ addr);
+ return KERN_SUCCESS;
+ }
+
+ GC_err_printf1("Unexpected KERN_PROTECTION_FAILURE at %p\n",addr);
+ /* Can't pass it along to the signal handler because that is
+ ignoring SIGBUS signals. We also shouldn't call ABORT here as
+ signals don't always work too well from the exception handler. */
+ GC_err_printf0("Aborting\n");
+ exit(EXIT_FAILURE);
+ #else /* BROKEN_EXCEPTION_HANDLING */
+ /* Pass it along to the next exception handler
+ (which should call SIGBUS/SIGSEGV) */
+ return FWD();
+ #endif /* !BROKEN_EXCEPTION_HANDLING */
+ }
+
+ #ifdef BROKEN_EXCEPTION_HANDLING
+ /* Reset the number of consecutive SIGBUSs */
+ GC_sigbus_count = 0;
+ #endif
+
+ if(GC_mprotect_state == GC_MP_NORMAL) { /* common case */
+ h = (struct hblk*)((word)addr & ~(GC_page_size-1));
+ UNPROTECT(h, GC_page_size);
+ for (i = 0; i < divHBLKSZ(GC_page_size); i++) {
+ register int index = PHT_HASH(h+i);
+ async_set_pht_entry_from_index(GC_dirty_pages, index);
+ }
+ } else if(GC_mprotect_state == GC_MP_DISCARDING) {
+ /* Lie to the thread for now. No sense UNPROTECT()ing the memory
+ when we're just going to PROTECT() it again later. The thread
+ will just fault again once it resumes */
+ } else {
+        /* Shouldn't happen, I don't think. */
+ GC_printf0("KERN_PROTECTION_FAILURE while world is stopped\n");
+ return FWD();
+ }
+ return KERN_SUCCESS;
+}
+#undef FWD
+
+/* These should never be called, but just in case... */
+kern_return_t catch_exception_raise_state(mach_port_name_t exception_port,
+ int exception, exception_data_t code, mach_msg_type_number_t codeCnt,
+ int flavor, thread_state_t old_state, int old_stateCnt,
+ thread_state_t new_state, int new_stateCnt)
+{
+ ABORT("catch_exception_raise_state");
+ return(KERN_INVALID_ARGUMENT);
+}
+kern_return_t catch_exception_raise_state_identity(
+ mach_port_name_t exception_port, mach_port_t thread, mach_port_t task,
+ int exception, exception_data_t code, mach_msg_type_number_t codeCnt,
+ int flavor, thread_state_t old_state, int old_stateCnt,
+ thread_state_t new_state, int new_stateCnt)
+{
+ ABORT("catch_exception_raise_state_identity");
+ return(KERN_INVALID_ARGUMENT);
+}
+
+
+#endif /* DARWIN && MPROTECT_VDB */
+
+# ifndef HAVE_INCREMENTAL_PROTECTION_NEEDS
+ int GC_incremental_protection_needs()
+ {
+ return GC_PROTECTS_NONE;
+ }
+# endif /* !HAVE_INCREMENTAL_PROTECTION_NEEDS */
+
+/*
+ * Call stack save code for debugging.
+ * Should probably be in mach_dep.c, but that requires reorganization.
+ */
+
+/* I suspect the following works for most X86 *nix variants, so */
+/* long as the frame pointer is explicitly stored. In the case of gcc, */
+/* compiler flags (e.g. -fomit-frame-pointer) determine whether it is. */
+#if defined(I386) && defined(LINUX) && defined(SAVE_CALL_CHAIN)
+# include <features.h>
+
+ struct frame {
+ struct frame *fr_savfp;
+ long fr_savpc;
+ long fr_arg[NARGS]; /* All the arguments go here. */
+ };
+#endif
+
+#if defined(SPARC)
+# if defined(LINUX)
+# include <features.h>
+
+ struct frame {
+ long fr_local[8];
+ long fr_arg[6];
+ struct frame *fr_savfp;
+ long fr_savpc;
+# ifndef __arch64__
+ char *fr_stret;
+# endif
+ long fr_argd[6];
+ long fr_argx[0];
+ };
+# else
+# if defined(SUNOS4)
+# include <machine/frame.h>
+# else
+# if defined (DRSNX)
+# include <sys/sparc/frame.h>
+# else
+# if defined(OPENBSD)
+# include <frame.h>
+# else
+# if defined(FREEBSD) || defined(NETBSD)
+# include <machine/frame.h>
+# else
+# include <sys/frame.h>
+# endif
+# endif
+# endif
+# endif
+# endif
+# if NARGS > 6
+ --> We only know how to to get the first 6 arguments
+# endif
+#endif /* SPARC */
+
+#ifdef NEED_CALLINFO
+/* Fill in the pc and argument information for up to NFRAMES of my */
+/* callers. Ignore my frame and my caller's frame.			*/
+
+#ifdef LINUX
+# include <unistd.h>
+#endif
+
+#endif /* NEED_CALLINFO */
+
+#if defined(GC_HAVE_BUILTIN_BACKTRACE)
+# include <execinfo.h>
+#endif
+
+#ifdef SAVE_CALL_CHAIN
+
+#if NARGS == 0 && NFRAMES % 2 == 0 /* No padding */ \
+ && defined(GC_HAVE_BUILTIN_BACKTRACE)
+
+#ifdef REDIRECT_MALLOC
+ /* Deal with possible malloc calls in backtrace by omitting */
+ /* the infinitely recursing backtrace. */
+# ifdef THREADS
+ __thread /* If your compiler doesn't understand this */
+ /* you could use something like pthread_getspecific. */
+# endif
+  GC_bool GC_in_save_callers = FALSE;
+#endif
+
+void GC_save_callers (info)
+struct callinfo info[NFRAMES];
+{
+ void * tmp_info[NFRAMES + 1];
+ int npcs, i;
+# define IGNORE_FRAMES 1
+
+ /* We retrieve NFRAMES+1 pc values, but discard the first, since it */
+ /* points to our own frame. */
+# ifdef REDIRECT_MALLOC
+ if (GC_in_save_callers) {
+ info[0].ci_pc = (word)(&GC_save_callers);
+ for (i = 1; i < NFRAMES; ++i) info[i].ci_pc = 0;
+ return;
+ }
+ GC_in_save_callers = TRUE;
+# endif
+ GC_ASSERT(sizeof(struct callinfo) == sizeof(void *));
+ npcs = backtrace((void **)tmp_info, NFRAMES + IGNORE_FRAMES);
+ BCOPY(tmp_info+IGNORE_FRAMES, info, (npcs - IGNORE_FRAMES) * sizeof(void *));
+ for (i = npcs - IGNORE_FRAMES; i < NFRAMES; ++i) info[i].ci_pc = 0;
+# ifdef REDIRECT_MALLOC
+ GC_in_save_callers = FALSE;
+# endif
+}
+
+#else /* No builtin backtrace; do it ourselves */
+
+#if (defined(OPENBSD) || defined(NETBSD) || defined(FREEBSD)) && defined(SPARC)
+# define FR_SAVFP fr_fp
+# define FR_SAVPC fr_pc
+#else
+# define FR_SAVFP fr_savfp
+# define FR_SAVPC fr_savpc
+#endif
+
+#if defined(SPARC) && (defined(__arch64__) || defined(__sparcv9))
+# define BIAS 2047
+#else
+# define BIAS 0
+#endif
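
(On 64-bit SPARC ABIs the %sp and %fp registers hold the true address minus 2047, so the walker below adds BIAS back before each dereference. A tiny illustration with a made-up register value:)

    #include <stdio.h>

    int main(void)
    {
        unsigned long fp_reg = 0x7fe000000801UL;  /* biased, as read    */
        unsigned long frame  = fp_reg + 2047;     /* actual frame addr  */
        printf("frame at %#lx\n", frame);
        return 0;
    }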
+
+void GC_save_callers (info)
+struct callinfo info[NFRAMES];
+{
+ struct frame *frame;
+ struct frame *fp;
+ int nframes = 0;
+# ifdef I386
+ /* We assume this is turned on only with gcc as the compiler. */
+ asm("movl %%ebp,%0" : "=r"(frame));
+ fp = frame;
+# else
+ frame = (struct frame *) GC_save_regs_in_stack ();
+ fp = (struct frame *)((long) frame -> FR_SAVFP + BIAS);
+#endif
+
+ for (; (!(fp HOTTER_THAN frame) && !(GC_stackbottom HOTTER_THAN (ptr_t)fp)
+ && (nframes < NFRAMES));
+ fp = (struct frame *)((long) fp -> FR_SAVFP + BIAS), nframes++) {
+ register int i;
+
+ info[nframes].ci_pc = fp->FR_SAVPC;
+# if NARGS > 0
+ for (i = 0; i < NARGS; i++) {
+ info[nframes].ci_arg[i] = ~(fp->fr_arg[i]);
+ }
+# endif /* NARGS > 0 */
+ }
+ if (nframes < NFRAMES) info[nframes].ci_pc = 0;
+}
+
+#endif /* No builtin backtrace */
+
+#endif /* SAVE_CALL_CHAIN */
+
+#ifdef NEED_CALLINFO
+
+/* Print info to stderr. We do NOT hold the allocation lock */
+void GC_print_callers (info)
+struct callinfo info[NFRAMES];
+{
+ register int i;
+ static int reentry_count = 0;
+ GC_bool stop = FALSE;
+
+ /* FIXME: This should probably use a different lock, so that we */
+ /* become callable with or without the allocation lock. */
+ LOCK();
+ ++reentry_count;
+ UNLOCK();
+
+# if NFRAMES == 1
+ GC_err_printf0("\tCaller at allocation:\n");
+# else
+ GC_err_printf0("\tCall chain at allocation:\n");
+# endif
+ for (i = 0; i < NFRAMES && !stop ; i++) {
+ if (info[i].ci_pc == 0) break;
+# if NARGS > 0
+ {
+ int j;
+
+ GC_err_printf0("\t\targs: ");
+ for (j = 0; j < NARGS; j++) {
+ if (j != 0) GC_err_printf0(", ");
+ GC_err_printf2("%d (0x%X)", ~(info[i].ci_arg[j]),
+ ~(info[i].ci_arg[j]));
+ }
+ GC_err_printf0("\n");
+ }
+# endif
+ if (reentry_count > 1) {
+ /* We were called during an allocation during */
+ /* a previous GC_print_callers call; punt. */
+ GC_err_printf1("\t\t##PC##= 0x%lx\n", info[i].ci_pc);
+ continue;
+ }
+ {
+# ifdef LINUX
+ FILE *pipe;
+# endif
+# if defined(GC_HAVE_BUILTIN_BACKTRACE) \
+ && !defined(GC_BACKTRACE_SYMBOLS_BROKEN)
+ char **sym_name =
+ backtrace_symbols((void **)(&(info[i].ci_pc)), 1);
+ char *name = sym_name[0];
+# else
+ char buf[40];
+ char *name = buf;
+ sprintf(buf, "##PC##= 0x%lx", info[i].ci_pc);
+# endif
+# if defined(LINUX) && !defined(SMALL_CONFIG)
+ /* Try for a line number. */
+ {
+# define EXE_SZ 100
+ static char exe_name[EXE_SZ];
+# define CMD_SZ 200
+ char cmd_buf[CMD_SZ];
+# define RESULT_SZ 200
+ static char result_buf[RESULT_SZ];
+ size_t result_len;
+ char *old_preload;
+# define PRELOAD_SZ 200
+ char preload_buf[PRELOAD_SZ];
+ static GC_bool found_exe_name = FALSE;
+ static GC_bool will_fail = FALSE;
+ int ret_code;
+ /* Try to get it via a hairy and expensive scheme. */
+ /* First we get the name of the executable: */
+ if (will_fail) goto out;
+ if (!found_exe_name) {
+ ret_code = readlink("/proc/self/exe", exe_name, EXE_SZ);
+ if (ret_code < 0 || ret_code >= EXE_SZ
+ || exe_name[0] != '/') {
+		will_fail = TRUE; /* Don't try again. */
+ goto out;
+ }
+ exe_name[ret_code] = '\0';
+ found_exe_name = TRUE;
+ }
+ /* Then we use popen to start addr2line -e <exe> <addr> */
+ /* There are faster ways to do this, but hopefully this */
+ /* isn't time critical. */
+ sprintf(cmd_buf, "/usr/bin/addr2line -f -e %s 0x%lx", exe_name,
+ (unsigned long)info[i].ci_pc);
+ old_preload = getenv ("LD_PRELOAD");
+ if (0 != old_preload) {
+ if (strlen (old_preload) >= PRELOAD_SZ) {
+ will_fail = TRUE;
+ goto out;
+ }
+ strcpy (preload_buf, old_preload);
+ unsetenv ("LD_PRELOAD");
+ }
+ pipe = popen(cmd_buf, "r");
+ if (0 != old_preload
+ && 0 != setenv ("LD_PRELOAD", preload_buf, 0)) {
+ WARN("Failed to reset LD_PRELOAD\n", 0);
+ }
+ if (pipe == NULL
+ || (result_len = fread(result_buf, 1, RESULT_SZ - 1, pipe))
+ == 0) {
+ if (pipe != NULL) pclose(pipe);
+ will_fail = TRUE;
+ goto out;
+ }
+ if (result_buf[result_len - 1] == '\n') --result_len;
+ result_buf[result_len] = 0;
+	    if (result_buf[0] == '?'
+		|| (result_buf[result_len-2] == ':'
+		    && result_buf[result_len-1] == '0')) {
+ pclose(pipe);
+ goto out;
+ }
+ /* Get rid of embedded newline, if any. Test for "main" */
+ {
+ char * nl = strchr(result_buf, '\n');
+ if (nl != NULL && nl < result_buf + result_len) {
+ *nl = ':';
+ }
+		if (nl != NULL && strncmp(result_buf, "main", nl - result_buf) == 0) {
+ stop = TRUE;
+ }
+ }
+ if (result_len < RESULT_SZ - 25) {
+ /* Add in hex address */
+ sprintf(result_buf + result_len, " [0x%lx]",
+ (unsigned long)info[i].ci_pc);
+ }
+ name = result_buf;
+ pclose(pipe);
+ out:;
+ }
+# endif /* LINUX */
+ GC_err_printf1("\t\t%s\n", name);
+# if defined(GC_HAVE_BUILTIN_BACKTRACE) \
+ && !defined(GC_BACKTRACE_SYMBOLS_BROKEN)
+ free(sym_name); /* May call GC_free; that's OK */
+# endif
+ }
+ }
+ LOCK();
+ --reentry_count;
+ UNLOCK();
+}
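
(For reference, the command assembled above has the shape "/usr/bin/addr2line -f -e <exe> 0x<pc>"; with -f, addr2line prints two lines, the function name and then file:line, which the code joins with ':'. A minimal popen() round-trip of the same shape; the executable path and address are placeholders:)

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *p = popen("/usr/bin/addr2line -f -e /bin/true 0x0", "r");

        if (p == NULL) return 1;
        while (fgets(line, sizeof line, p) != NULL)
            fputs(line, stdout);
        return pclose(p);
    }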
+
+#endif /* NEED_CALLINFO */
+
+
+
+#if defined(LINUX) && defined(__ELF__) && !defined(SMALL_CONFIG)
+
+/* Dump /proc/self/maps to GC_stderr, to enable looking up names for
+ addresses in FIND_LEAK output. */
+
+static word dump_maps(char *maps)
+{
+ GC_err_write(maps, strlen(maps));
+ return 1;
+}
+
+void GC_print_address_map()
+{
+ GC_err_printf0("---------- Begin address map ----------\n");
+ GC_apply_to_maps(dump_maps);
+ GC_err_printf0("---------- End address map ----------\n");
+}
+
+#endif
+
+
Added: llvm-gcc-4.2/trunk/boehm-gc/pc_excludes
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/pc_excludes?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/pc_excludes (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/pc_excludes Thu Nov 8 16:56:19 2007
@@ -0,0 +1,21 @@
+solaris_threads.c
+solaris_pthreads.c
+irix_threads.c
+pcr_interface.c
+real_malloc.c
+mips_mach_dep.s
+rs6000_mach_dep.s
+alpha_mach_dep.s
+sparc_mach_dep.s
+PCR-Makefile
+setjmp_t.c
+callprocs
+doc/gc.man
+pc_excludes
+barrett_diagram
+include/gc_c++.h
+include/gc_inline.h
+doc/README.hp
+doc/README.rs6000
+doc/README.sgi
+
Added: llvm-gcc-4.2/trunk/boehm-gc/pcr_interface.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/pcr_interface.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/pcr_interface.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/pcr_interface.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,178 @@
+/*
+ * Copyright (c) 1991-1994 by Xerox Corporation. All rights reserved.
+ *
+ * THIS MATERIAL IS PROVIDED AS IS, WITH ABSOLUTELY NO WARRANTY EXPRESSED
+ * OR IMPLIED. ANY USE IS AT YOUR OWN RISK.
+ *
+ * Permission is hereby granted to use or copy this program
+ * for any purpose, provided the above notices are retained on all copies.
+ * Permission to modify the code and to distribute modified code is granted,
+ * provided the above notices are retained, and a notice that the code was
+ * modified is included with the above copyright notice.
+ */
+# include "private/gc_priv.h"
+
+# ifdef PCR
+/*
+ * Note that POSIX PCR requires an ANSI C compiler. Hence we are allowed
+ * to make the same assumption here.
+ * We wrap all of the allocator functions to avoid questions of
+ * compatibility between the prototyped and nonprototyped versions of the
+ * functions.
+ */
+# include "config/PCR_StdTypes.h"
+# include "mm/PCR_MM.h"
+# include <errno.h>
+
+# define MY_MAGIC 17L
+# define MY_DEBUGMAGIC 42L
+
+void * GC_AllocProc(size_t size, PCR_Bool ptrFree, PCR_Bool clear )
+{
+ if (ptrFree) {
+ void * result = (void *)GC_malloc_atomic(size);
+ if (clear && result != 0) BZERO(result, size);
+ return(result);
+ } else {
+ return((void *)GC_malloc(size));
+ }
+}
+
+void * GC_DebugAllocProc(size_t size, PCR_Bool ptrFree, PCR_Bool clear )
+{
+ if (ptrFree) {
+ void * result = (void *)GC_debug_malloc_atomic(size, __FILE__,
+ __LINE__);
+ if (clear && result != 0) BZERO(result, size);
+ return(result);
+ } else {
+ return((void *)GC_debug_malloc(size, __FILE__, __LINE__));
+ }
+}
+
+# define GC_ReallocProc GC_realloc
+void * GC_DebugReallocProc(void * old_object, size_t new_size_in_bytes)
+{
+ return(GC_debug_realloc(old_object, new_size_in_bytes, __FILE__, __LINE__));
+}
+
+# define GC_FreeProc GC_free
+# define GC_DebugFreeProc GC_debug_free
+
+typedef struct {
+ PCR_ERes (*ed_proc)(void *p, size_t size, PCR_Any data);
+ GC_bool ed_pointerfree;
+ PCR_ERes ed_fail_code;
+ PCR_Any ed_client_data;
+} enumerate_data;
+
+void GC_enumerate_block(h, ed)
+register struct hblk *h;
+enumerate_data * ed;
+{
+ register hdr * hhdr;
+ register int sz;
+ word *p;
+ word * lim;
+
+ hhdr = HDR(h);
+ sz = hhdr -> hb_sz;
+ if (sz >= 0 && ed -> ed_pointerfree
+ || sz <= 0 && !(ed -> ed_pointerfree)) return;
+ if (sz < 0) sz = -sz;
+ lim = (word *)(h+1) - sz;
+ p = (word *)h;
+ do {
+ if (PCR_ERes_IsErr(ed -> ed_fail_code)) return;
+ ed -> ed_fail_code =
+ (*(ed -> ed_proc))(p, WORDS_TO_BYTES(sz), ed -> ed_client_data);
+        p += sz;
+ } while (p <= lim);
+}
+
+struct PCR_MM_ProcsRep * GC_old_allocator = 0;
+
+PCR_ERes GC_EnumerateProc(
+ PCR_Bool ptrFree,
+ PCR_ERes (*proc)(void *p, size_t size, PCR_Any data),
+ PCR_Any data
+)
+{
+ enumerate_data ed;
+
+ ed.ed_proc = proc;
+ ed.ed_pointerfree = ptrFree;
+ ed.ed_fail_code = PCR_ERes_okay;
+ ed.ed_client_data = data;
+ GC_apply_to_all_blocks(GC_enumerate_block, &ed);
+ if (ed.ed_fail_code != PCR_ERes_okay) {
+ return(ed.ed_fail_code);
+ } else {
+ /* Also enumerate objects allocated by my predecessors */
+ return((*(GC_old_allocator->mmp_enumerate))(ptrFree, proc, data));
+ }
+}
+
+void GC_DummyFreeProc(void *p) {}
+
+void GC_DummyShutdownProc(void) {}
+
+struct PCR_MM_ProcsRep GC_Rep = {
+ MY_MAGIC,
+ GC_AllocProc,
+ GC_ReallocProc,
+ GC_DummyFreeProc, /* mmp_free */
+ GC_FreeProc, /* mmp_unsafeFree */
+ GC_EnumerateProc,
+ GC_DummyShutdownProc /* mmp_shutdown */
+};
+
+struct PCR_MM_ProcsRep GC_DebugRep = {
+ MY_DEBUGMAGIC,
+ GC_DebugAllocProc,
+ GC_DebugReallocProc,
+ GC_DummyFreeProc, /* mmp_free */
+ GC_DebugFreeProc, /* mmp_unsafeFree */
+ GC_EnumerateProc,
+ GC_DummyShutdownProc /* mmp_shutdown */
+};
+
+GC_bool GC_use_debug = 0;
+
+void GC_pcr_install()
+{
+ PCR_MM_Install((GC_use_debug? &GC_DebugRep : &GC_Rep), &GC_old_allocator);
+}
+
+PCR_ERes
+PCR_GC_Setup(void)
+{
+ return PCR_ERes_okay;
+}
+
+PCR_ERes
+PCR_GC_Run(void)
+{
+
+ if( !PCR_Base_TestPCRArg("-nogc") ) {
+ GC_quiet = ( PCR_Base_TestPCRArg("-gctrace") ? 0 : 1 );
+ GC_use_debug = (GC_bool)PCR_Base_TestPCRArg("-debug_alloc");
+ GC_init();
+ if( !PCR_Base_TestPCRArg("-nogc_incremental") ) {
+ /*
+ * awful hack to test whether VD is implemented ...
+ */
+ if( PCR_VD_Start( 0, NIL, 0) != PCR_ERes_FromErr(ENOSYS) ) {
+ GC_enable_incremental();
+ }
+ }
+ }
+ return PCR_ERes_okay;
+}
+
+void GC_push_thread_structures(void)
+{
+ /* PCR doesn't work unless static roots are pushed. Can't get here. */
+ ABORT("In GC_push_thread_structures()");
+}
+
+# endif
Added: llvm-gcc-4.2/trunk/boehm-gc/powerpc_darwin_mach_dep.s
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/powerpc_darwin_mach_dep.s?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/powerpc_darwin_mach_dep.s (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/powerpc_darwin_mach_dep.s Thu Nov 8 16:56:19 2007
@@ -0,0 +1,95 @@
+#if defined(__ppc64__)
+#define MODE_CHOICE(x, y) y
+#else
+#define MODE_CHOICE(x, y) x
+#endif
+
+#define lgu MODE_CHOICE(lwzu, ldu)
+
+#define g_long MODE_CHOICE(long, quad) /* usage is ".g_long" */
+
+#define LOG2_GPR_BYTES MODE_CHOICE(2,3) /* log2(GPR_BYTES) */
+
+; GC_push_regs function. Under some optimization levels GCC will clobber
+; some of the non-volatile registers before we get a chance to save them;
+; therefore, this cannot be inline asm.
+
+.text
+ .align LOG2_GPR_BYTES
+ .globl _GC_push_regs
+_GC_push_regs:
+
+ ; Prolog
+ mflr r0
+ stw r0,8(r1)
+ stwu r1,-80(r1)
+
+ ; Push r13-r31
+ mr r3,r13
+ bl L_GC_push_one$stub
+ mr r3,r14
+ bl L_GC_push_one$stub
+ mr r3,r15
+ bl L_GC_push_one$stub
+ mr r3,r16
+ bl L_GC_push_one$stub
+ mr r3,r17
+ bl L_GC_push_one$stub
+ mr r3,r18
+ bl L_GC_push_one$stub
+ mr r3,r19
+ bl L_GC_push_one$stub
+ mr r3,r20
+ bl L_GC_push_one$stub
+ mr r3,r21
+ bl L_GC_push_one$stub
+ mr r3,r22
+ bl L_GC_push_one$stub
+ mr r3,r23
+ bl L_GC_push_one$stub
+ mr r3,r24
+ bl L_GC_push_one$stub
+ mr r3,r25
+ bl L_GC_push_one$stub
+ mr r3,r26
+ bl L_GC_push_one$stub
+ mr r3,r27
+ bl L_GC_push_one$stub
+ mr r3,r28
+ bl L_GC_push_one$stub
+ mr r3,r29
+ bl L_GC_push_one$stub
+ mr r3,r30
+ bl L_GC_push_one$stub
+ mr r3,r31
+ bl L_GC_push_one$stub
+
+	; Epilog
+ lwz r0,88(r1)
+ addi r1,r1,80
+ mtlr r0
+
+ ; Return
+ blr
+
+; PIC stuff, generated by GCC
+
+.data
+.section __TEXT,__picsymbolstub1,symbol_stubs,pure_instructions,32
+ .align LOG2_GPR_BYTES
+L_GC_push_one$stub:
+ .indirect_symbol _GC_push_one
+ mflr r0
+ bcl 20,31,L0$_GC_push_one
+L0$_GC_push_one:
+ mflr r11
+ addis r11,r11,ha16(L_GC_push_one$lazy_ptr-L0$_GC_push_one)
+ mtlr r0
+ lgu r12,lo16(L_GC_push_one$lazy_ptr-L0$_GC_push_one)(r11)
+ mtctr r12
+ bctr
+.data
+.lazy_symbol_pointer
+L_GC_push_one$lazy_ptr:
+ .indirect_symbol _GC_push_one
+ .g_long dyld_stub_binding_helper
Added: llvm-gcc-4.2/trunk/boehm-gc/pthread_stop_world.c
URL: http://llvm.org/viewvc/llvm-project/llvm-gcc-4.2/trunk/boehm-gc/pthread_stop_world.c?rev=43913&view=auto
==============================================================================
--- llvm-gcc-4.2/trunk/boehm-gc/pthread_stop_world.c (added)
+++ llvm-gcc-4.2/trunk/boehm-gc/pthread_stop_world.c Thu Nov 8 16:56:19 2007
@@ -0,0 +1,572 @@
+#include "private/pthread_support.h"
+
+#if defined(GC_PTHREADS) && !defined(GC_SOLARIS_THREADS) \
+ && !defined(GC_WIN32_THREADS) && !defined(GC_DARWIN_THREADS)
+
+#include <signal.h>
+#include <semaphore.h>
+#include <errno.h>
+#include <unistd.h>
+#include <sys/time.h>
+#ifndef HPUX
+# include <sys/select.h>
+ /* Doesn't exist on HP/UX 11.11. */
+#endif
+
+void suspend_self();
+
+#if DEBUG_THREADS
+
+#ifndef NSIG
+# if defined(MAXSIG)
+# define NSIG (MAXSIG+1)
+# elif defined(_NSIG)
+# define NSIG _NSIG
+# elif defined(__SIGRTMAX)
+# define NSIG (__SIGRTMAX+1)
+# else
+ --> please fix it
+# endif
+#endif
+
+void GC_print_sig_mask()
+{
+ sigset_t blocked;
+ int i;
+
+ if (pthread_sigmask(SIG_BLOCK, NULL, &blocked) != 0)
+ ABORT("pthread_sigmask");
+ GC_printf0("Blocked: ");
+ for (i = 1; i < NSIG; i++) {
+ if (sigismember(&blocked, i)) { GC_printf1("%ld ",(long) i); }
+ }
+ GC_printf0("\n");
+}
+
+#endif
+
+/* Remove from a set the signals that we want to allow in the	*/
+/* thread-stopping handler.					*/
+void GC_remove_allowed_signals(sigset_t *set)
+{
+# ifdef NO_SIGNALS
+ if (sigdelset(set, SIGINT) != 0
+ || sigdelset(set, SIGQUIT) != 0
+ || sigdelset(set, SIGABRT) != 0
+ || sigdelset(set, SIGTERM) != 0) {
+ ABORT("sigdelset() failed");
+ }
+# endif
+
+# ifdef MPROTECT_VDB
+ /* Handlers write to the thread structure, which is in the heap, */
+ /* and hence can trigger a protection fault. */
+ if (sigdelset(set, SIGSEGV) != 0
+# ifdef SIGBUS
+ || sigdelset(set, SIGBUS) != 0
+# endif
+ ) {
+ ABORT("sigdelset() failed");
+ }
+# endif
+}
+
+static sigset_t suspend_handler_mask;
+
+volatile sig_atomic_t GC_stop_count;
+ /* Incremented at the beginning of GC_stop_world. */
+
+volatile sig_atomic_t GC_world_is_stopped = FALSE;
+ /* FALSE ==> it is safe for threads to restart, i.e. */
+ /* they will see another suspend signal before they */
+ /* are expected to stop (unless they have voluntarily */
+ /* stopped). */
+
+void GC_brief_async_signal_safe_sleep()
+{
+ struct timeval tv;
+ tv.tv_sec = 0;
+ tv.tv_usec = 1000 * TIME_LIMIT / 2;
+ select(0, 0, 0, 0, &tv);
+}
+
+#ifdef GC_OSF1_THREADS
+ GC_bool GC_retry_signals = TRUE;
+#else
+ GC_bool GC_retry_signals = FALSE;
+#endif
+
+/*
+ * We use signals to stop threads during GC.
+ *
+ * Suspended threads wait in signal handler for SIG_THR_RESTART.
+ * That's more portable than semaphores or condition variables.
+ * (We do use sem_post from a signal handler, but that should be portable.)
+ *
+ * The thread suspension signal SIG_SUSPEND is now defined in gc_priv.h.
+ * Note that we can't just stop a thread; we need it to save its stack
+ * pointer(s) and acknowledge.
+ */
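
(A condensed timeline of the handshake the code below implements; signal setup, masks, and the retry path are omitted:)

    /* stopper (holds GC lock)             victim (SIG_SUSPEND handler)   */
    /* ++GC_stop_count                                                    */
    /* pthread_kill(t, SIG_SUSPEND)  --->  record stack pointer           */
    /* sem_wait(&ack)                <---  sem_post(&GC_suspend_ack_sem)  */
    /* ... mark/collect ...                sigsuspend(mask w/o RESTART)   */
    /* pthread_kill(t, SIG_THR_RESTART) -> handler returns, thread runs   */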
+
+#ifndef SIG_THR_RESTART
+# if defined(GC_HPUX_THREADS) || defined(GC_OSF1_THREADS)
+# ifdef _SIGRTMIN
+#     define SIG_THR_RESTART (_SIGRTMIN + 5)
+#    else
+#     define SIG_THR_RESTART (SIGRTMIN + 5)
+# endif
+# else
+# define SIG_THR_RESTART SIGXCPU
+# endif
+#endif
+
+sem_t GC_suspend_ack_sem;
+
+void GC_suspend_handler_inner(ptr_t sig_arg);
+
+#if defined(IA64) || defined(HP_PA)
+extern void GC_with_callee_saves_pushed();
+
+void GC_suspend_handler(int sig)
+{
+ GC_thread me = GC_lookup_thread (pthread_self());
+ if (me -> flags & SUSPENDED)
+ suspend_self();
+ else {
+ int old_errno = errno;
+ GC_with_callee_saves_pushed(GC_suspend_handler_inner, (ptr_t)(word)sig);
+ errno = old_errno;
+ }
+}
+
+#else
+/* We believe that in all other cases the full context is already */
+/* in the signal handler frame. */
+void GC_suspend_handler(int sig)
+{
+ GC_thread me = GC_lookup_thread(pthread_self());
+ if (me -> flags & SUSPENDED)
+ suspend_self();
+ else {
+ int old_errno = errno;
+ GC_suspend_handler_inner((ptr_t)(word)sig);
+ errno = old_errno;
+ }
+}
+#endif
+
+void GC_suspend_handler_inner(ptr_t sig_arg)
+{
+ int sig = (int)(word)sig_arg;
+ int dummy;
+ pthread_t my_thread = pthread_self();
+ GC_thread me;
+# ifdef PARALLEL_MARK
+ word my_mark_no = GC_mark_no;
+ /* Marker can't proceed until we acknowledge. Thus this is */
+	/* guaranteed to be the mark_no corresponding to our		*/
+ /* suspension, i.e. the marker can't have incremented it yet. */
+# endif
+ word my_stop_count = GC_stop_count;
+
+ if (sig != SIG_SUSPEND) ABORT("Bad signal in suspend_handler");
+
+#if DEBUG_THREADS
+ GC_printf1("Suspending 0x%lx\n", my_thread);
+#endif
+
+ me = GC_lookup_thread(my_thread);
+ /* The lookup here is safe, since I'm doing this on behalf */
+ /* of a thread which holds the allocation lock in order */
+ /* to stop the world. Thus concurrent modification of the */
+ /* data structure is impossible. */
+ if (me -> stop_info.last_stop_count == my_stop_count) {
+ /* Duplicate signal. OK if we are retrying. */
+ if (!GC_retry_signals) {
+ WARN("Duplicate suspend signal in thread %lx\n",
+ pthread_self());
+ }
+ return;
+ }
+# ifdef SPARC
+ me -> stop_info.stack_ptr = (ptr_t)GC_save_regs_in_stack();
+# else
+ me -> stop_info.stack_ptr = (ptr_t)(&dummy);
+# endif
+# ifdef IA64
+ me -> backing_store_ptr = (ptr_t)GC_save_regs_in_stack();
+# endif
+
+ /* Tell the thread that wants to stop the world that this */
+ /* thread has been stopped. Note that sem_post() is */
+ /* the only async-signal-safe primitive in LinuxThreads. */
+ sem_post(&GC_suspend_ack_sem);
+ me -> stop_info.last_stop_count = my_stop_count;
+
+ /* Wait until that thread tells us to restart by sending */
+ /* this thread a SIG_THR_RESTART signal. */
+ /* SIG_THR_RESTART should be masked at this point. Thus there */
+ /* is no race. */
+ /* We do not continue until we receive a SIG_THR_RESTART, */
+ /* but we do not take that as authoritative. (We may be */
+ /* accidentally restarted by one of the user signals we */
+ /* don't block.) After we receive the signal, we use a */
+ /* primitive and expensive mechanism to wait until it's */
+ /* really safe to proceed. Under normal circumstances, */
+ /* this code should not be executed. */
+ sigsuspend(&suspend_handler_mask); /* Wait for signal */
+ while (GC_world_is_stopped && GC_stop_count == my_stop_count) {
+ GC_brief_async_signal_safe_sleep();
+# if DEBUG_THREADS
+ GC_err_printf0("Sleeping in signal handler");
+# endif
+ }
+ /* If the RESTART signal gets lost, we can still lose. That should be */
+ /* less likely than losing the SUSPEND signal, since we don't do much */
+ /* between the sem_post and sigsuspend. */
+ /* We'd need more handshaking to work around that. */
+ /* Simply dropping the sigsuspend call should be safe, but is unlikely */
+ /* to be efficient. */
+
+#if DEBUG_THREADS
+ GC_printf1("Continuing 0x%lx\n", my_thread);
+#endif
+}
+
+void GC_restart_handler(int sig)
+{
+ pthread_t my_thread = pthread_self();
+
+    if (sig != SIG_THR_RESTART) ABORT("Bad signal in restart_handler");
+
+ /*
+ ** Note: even if we don't do anything useful here,
+ ** it would still be necessary to have a signal handler,
+ ** rather than ignoring the signals, otherwise
+ ** the signals will not be delivered at all, and
+ ** will thus not interrupt the sigsuspend() above.
+ */
+
+#if DEBUG_THREADS
+ GC_printf1("In GC_restart_handler for 0x%lx\n", pthread_self());
+#endif
+}
+
+# ifdef IA64
+# define IF_IA64(x) x
+# else
+# define IF_IA64(x)
+# endif
+/* We hold allocation lock. Should do exactly the right thing if the */
+/* world is stopped. Should not fail if it isn't. */
+void GC_push_all_stacks()
+{
+ GC_bool found_me = FALSE;
+ int i;
+ GC_thread p;
+ ptr_t lo, hi;
+ /* On IA64, we also need to scan the register backing store. */
+ IF_IA64(ptr_t bs_lo; ptr_t bs_hi;)
+ pthread_t me = pthread_self();
+
+ if (!GC_thr_initialized) GC_thr_init();
+ #if DEBUG_THREADS
+ GC_printf1("Pushing stacks from thread 0x%lx\n", (unsigned long) me);
+ #endif
+ for (i = 0; i < THREAD_TABLE_SZ; i++) {
+ for (p = GC_threads[i]; p != 0; p = p -> next) {
+ if (p -> flags & FINISHED) continue;
+ if (pthread_equal(p -> id, me)) {
+# ifdef SPARC
+ lo = (ptr_t)GC_save_regs_in_stack();
+# else
+ lo = GC_approx_sp();
+# endif
+ found_me = TRUE;
+ IF_IA64(bs_hi = (ptr_t)GC_save_regs_in_stack();)
+ } else {
+ lo = p -> stop_info.stack_ptr;
+ IF_IA64(bs_hi = p -> backing_store_ptr;)
+ }
+ if ((p -> flags & MAIN_THREAD) == 0) {
+ hi = p -> stack_end;
+ IF_IA64(bs_lo = p -> backing_store_end);
+ } else {
+ /* The original stack. */
+ hi = GC_stackbottom;
+ IF_IA64(bs_lo = BACKING_STORE_BASE;)
+ }
+ #if DEBUG_THREADS
+ GC_printf3("Stack for thread 0x%lx = [%lx,%lx)\n",
+ (unsigned long) p -> id,
+ (unsigned long) lo, (unsigned long) hi);
+ #endif
+ if (0 == lo) ABORT("GC_push_all_stacks: sp not set!\n");
+# ifdef STACK_GROWS_UP
+ /* We got them backwards! */
+ GC_push_all_stack(hi, lo);
+# else
+ GC_push_all_stack(lo, hi);
+# endif
+# ifdef IA64
+# if DEBUG_THREADS
+ GC_printf3("Reg stack for thread 0x%lx = [%lx,%lx)\n",
+ (unsigned long) p -> id,
+ (unsigned long) bs_lo, (unsigned long) bs_hi);
+# endif
+ if (pthread_equal(p -> id, me)) {
+ GC_push_all_eager(bs_lo, bs_hi);
+ } else {
+ GC_push_all_stack(bs_lo, bs_hi);
+ }
+# endif
+ }
+ }
+ if (!found_me && !GC_in_thread_creation)
+ ABORT("Collecting from unknown thread.");
+}
+
+/* There seems to be a very rare thread stopping problem. To help us */
+/* debug that, we save the id and pid of the stopping thread.	*/
+pthread_t GC_stopping_thread;
+int GC_stopping_pid;
+
+/* We hold the allocation lock.  Suspend all threads that might       */
+/* still be running.  Return the number of suspend signals that       */
+/* were sent.  (The pthread_kill/ESRCH idiom is sketched after the    */
+/* function.)                                                         */
+int GC_suspend_all()
+{
+ int n_live_threads = 0;
+ int i;
+ GC_thread p;
+ int result;
+ pthread_t my_thread = pthread_self();
+
+ GC_stopping_thread = my_thread; /* debugging only. */
+ GC_stopping_pid = getpid(); /* debugging only. */
+ for (i = 0; i < THREAD_TABLE_SZ; i++) {
+ for (p = GC_threads[i]; p != 0; p = p -> next) {
+ if (p -> id != my_thread) {
+ if (p -> flags & FINISHED) continue;
+ if (p -> stop_info.last_stop_count == GC_stop_count) continue;
+ if (p -> thread_blocked) /* Will wait */ continue;
+ n_live_threads++;
+ #if DEBUG_THREADS
+ GC_printf1("Sending suspend signal to 0x%lx\n", p -> id);
+ #endif
+
+ result = pthread_kill(p -> id, SIG_SUSPEND);
+ switch(result) {
+ case ESRCH:
+ /* Not really there anymore. Possible? */
+ n_live_threads--;
+ break;
+ case 0:
+ break;
+ default:
+ ABORT("pthread_kill failed");
+ }
+ }
+ }
+ }
+ return n_live_threads;
+}
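+
+/* Editorial sketch, not part of the committed code: the signalling    */
+/* idiom used above.  A thread can exit between the table walk and     */
+/* the pthread_kill, so ESRCH is tolerated and the thread is simply    */
+/* not counted; any other error is fatal.  sketch_signal_thread is an  */
+/* illustrative stand-in; <errno.h> is assumed included.               */
+static int sketch_signal_thread(pthread_t tid, int sig)
+{
+    int err = pthread_kill(tid, sig);
+    if (err == ESRCH) return 0;         /* already gone: no ack owed   */
+    if (err != 0) ABORT("pthread_kill failed");
+    return 1;                           /* exactly one ack will follow */
+}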
+
+/* Caller holds allocation lock.  (The sem_wait/EINTR idiom used in   */
+/* the ack loop below is sketched after the function.)                */
+void GC_stop_world()
+{
+ int i;
+ int n_live_threads;
+ int code;
+
+ #if DEBUG_THREADS
+ GC_printf1("Stopping the world from 0x%lx\n", pthread_self());
+ #endif
+
+ /* Make sure all free list construction has stopped before we start. */
+ /* No new construction can start, since free list construction is */
+ /* required to acquire and release the GC lock before it starts, */
+ /* and we have the lock. */
+# ifdef PARALLEL_MARK
+ GC_acquire_mark_lock();
+ GC_ASSERT(GC_fl_builder_count == 0);
+ /* We should have previously waited for it to become zero. */
+# endif /* PARALLEL_MARK */
+ ++GC_stop_count;
+ GC_world_is_stopped = TRUE;
+ n_live_threads = GC_suspend_all();
+
+ if (GC_retry_signals) {
+      unsigned long wait_usecs = 0; /* Total wait since last retry. */
+# define WAIT_UNIT 3000
+# define RETRY_INTERVAL 100000
+ for (;;) {
+ int ack_count;
+
+ sem_getvalue(&GC_suspend_ack_sem, &ack_count);
+ if (ack_count == n_live_threads) break;
+ if (wait_usecs > RETRY_INTERVAL) {
+ int newly_sent = GC_suspend_all();
+
+# ifdef CONDPRINT
+ if (GC_print_stats) {
+ GC_printf1("Resent %ld signals after timeout\n",
+ newly_sent);
+ }
+# endif
+ sem_getvalue(&GC_suspend_ack_sem, &ack_count);
+ if (newly_sent < n_live_threads - ack_count) {
+ WARN("Lost some threads during GC_stop_world?!\n",0);
+ n_live_threads = ack_count + newly_sent;
+ }
+ wait_usecs = 0;
+ }
+ usleep(WAIT_UNIT);
+ wait_usecs += WAIT_UNIT;
+ }
+ }
+ for (i = 0; i < n_live_threads; i++) {
+ while (0 != (code = sem_wait(&GC_suspend_ack_sem))) {
+ if (errno != EINTR) {
+	    GC_err_printf1("Sem_wait returned %ld\n", (long)code);
+ ABORT("sem_wait for handler failed");
+ }
+ }
+ }
+# ifdef PARALLEL_MARK
+ GC_release_mark_lock();
+# endif
+ #if DEBUG_THREADS
+ GC_printf1("World stopped from 0x%lx\n", pthread_self());
+ #endif
+ GC_stopping_thread = 0; /* debugging only */
+}
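+
+/* Editorial sketch, not part of the committed code: the EINTR idiom   */
+/* from the ack loop above.  The stopping thread may itself field      */
+/* unrelated signals, so sem_wait() must be retried on EINTR; any      */
+/* other failure is fatal.  sketch_wait_for_ack is an illustrative     */
+/* stand-in.                                                           */
+static void sketch_wait_for_ack(sem_t *sem)
+{
+    while (sem_wait(sem) != 0) {
+        if (errno != EINTR) ABORT("sem_wait for handler failed");
+    }
+}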
+
+void suspend_self() {
+ GC_thread me = GC_lookup_thread(pthread_self());
+ if (me == NULL)
+ ABORT("attempting to suspend unknown thread");
+
+ me -> flags |= SUSPENDED;
+ GC_start_blocking();
+ while (me -> flags & SUSPENDED)
+ GC_brief_async_signal_safe_sleep();
+ GC_end_blocking();
+}
+
+void GC_suspend_thread(pthread_t thread) {
+  if (pthread_equal(thread, pthread_self()))
+ suspend_self();
+ else {
+ int result;
+ GC_thread t = GC_lookup_thread(thread);
+ if (t == NULL)
+ ABORT("attempting to suspend unknown thread");
+
+ t -> flags |= SUSPENDED;
+    t -> flags |= SUSPENDED;
+    result = pthread_kill(t -> id, SIG_SUSPEND);
+ switch (result) {
+ case ESRCH:
+ case 0:
+ break;
+ default:
+ ABORT("pthread_kill failed");
+ }
+ }
+}
+
+void GC_resume_thread(pthread_t thread) {
+ GC_thread t = GC_lookup_thread(thread);
+ if (t == NULL)
+ ABORT("attempting to resume unknown thread");
+
+ t -> flags &= ~SUSPENDED;
+}
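+
+/* Editorial sketch, not part of the committed code: intended client   */
+/* usage of the pair above.  `worker` is an illustrative pthread_t     */
+/* created elsewhere; the suspended thread leaves its wait loop once   */
+/* GC_resume_thread() clears the SUSPENDED flag.  Note the suspend     */
+/* signal is asynchronous, so real code would also wait for the        */
+/* worker to actually park before inspecting it.                       */
+static void sketch_checkpoint_worker(pthread_t worker)
+{
+    GC_suspend_thread(worker);   /* flag set; SIG_SUSPEND delivered    */
+    /* ... inspect or checkpoint the worker's state here ...           */
+    GC_resume_thread(worker);    /* clear SUSPENDED; worker resumes    */
+}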
+
+/* Caller holds allocation lock, and has held it continuously since */
+/* the world stopped. */
+void GC_start_world()
+{
+ pthread_t my_thread = pthread_self();
+ register int i;
+ register GC_thread p;
+ register int n_live_threads = 0;
+ register int result;
+
+# if DEBUG_THREADS
+ GC_printf0("World starting\n");
+# endif
+
+ GC_world_is_stopped = FALSE;
+ for (i = 0; i < THREAD_TABLE_SZ; i++) {
+ for (p = GC_threads[i]; p != 0; p = p -> next) {
+ if (p -> id != my_thread) {
+ if (p -> flags & FINISHED) continue;
+ if (p -> thread_blocked) continue;
+ n_live_threads++;
+ #if DEBUG_THREADS
+ GC_printf1("Sending restart signal to 0x%lx\n", p -> id);
+ #endif
+ result = pthread_kill(p -> id, SIG_THR_RESTART);
+ switch(result) {
+ case ESRCH:
+ /* Not really there anymore. Possible? */
+ n_live_threads--;
+ break;
+ case 0:
+ break;
+ default:
+ ABORT("pthread_kill failed");
+ }
+ }
+ }
+ }
+ #if DEBUG_THREADS
+ GC_printf0("World started\n");
+ #endif
+}
+
+void GC_stop_init() {
+ struct sigaction act;
+
+ if (sem_init(&GC_suspend_ack_sem, 0, 0) != 0)
+ ABORT("sem_init failed");
+
+ act.sa_flags = SA_RESTART;
+ if (sigfillset(&act.sa_mask) != 0) {
+ ABORT("sigfillset() failed");
+ }
+ GC_remove_allowed_signals(&act.sa_mask);
+ /* SIG_THR_RESTART is set in the resulting mask. */
+ /* It is unmasked by the handler when necessary. */
+ act.sa_handler = GC_suspend_handler;
+ if (sigaction(SIG_SUSPEND, &act, NULL) != 0) {
+ ABORT("Cannot set SIG_SUSPEND handler");
+ }
+
+ act.sa_handler = GC_restart_handler;
+ if (sigaction(SIG_THR_RESTART, &act, NULL) != 0) {
+ ABORT("Cannot set SIG_THR_RESTART handler");
+ }
+
+  /* Initialize suspend_handler_mask.  It excludes SIG_THR_RESTART. */
+ if (sigfillset(&suspend_handler_mask) != 0) ABORT("sigfillset() failed");
+ GC_remove_allowed_signals(&suspend_handler_mask);
+ if (sigdelset(&suspend_handler_mask, SIG_THR_RESTART) != 0)
+ ABORT("sigdelset() failed");
+
+  /* Check for GC_RETRY_SIGNALS (usage is sketched after the function). */
+ if (0 != GETENV("GC_RETRY_SIGNALS")) {
+ GC_retry_signals = TRUE;
+ }
+ if (0 != GETENV("GC_NO_RETRY_SIGNALS")) {
+ GC_retry_signals = FALSE;
+ }
+# ifdef CONDPRINT
+ if (GC_print_stats && GC_retry_signals) {
+ GC_printf0("Will retry suspend signal if necessary.\n");
+ }
+# endif
+}
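+
+/* Editorial sketch, not part of the committed code: exercising the    */
+/* environment toggles read above.  Only the variables' presence       */
+/* matters, not their values; setenv() from <stdlib.h> is assumed,     */
+/* and the calls must precede collector initialization.                */
+static void sketch_enable_signal_retry(void)
+{
+    setenv("GC_RETRY_SIGNALS", "1", 1);      /* resend on timeout      */
+    /* setenv("GC_NO_RETRY_SIGNALS", "1", 1);   forces retries off     */
+}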
+
+#endif