[llvm-dev] How to use LLVM-C to JIT-compile the floating-point identity function

Jan Wedekind via llvm-dev llvm-dev at lists.llvm.org
Fri Dec 8 13:53:49 PST 2017


Hi,
I am trying to JIT-compile code that takes floating-point function arguments 
using the LLVM-C API. So far I have written the following code:

     #include <stdlib.h>
     #include <stdio.h>
     #include <llvm-c/Core.h>
     #include <llvm-c/Analysis.h>
     #include <llvm-c/ExecutionEngine.h>
     #include <llvm-c/Target.h>
     #include <llvm-c/Transforms/Scalar.h>


     int main (int argc, char const *argv[])
     {
       char *error = NULL;
       LLVMLinkInMCJIT();
       LLVMInitializeNativeTarget();
       LLVMInitializeNativeAsmPrinter();
       LLVMInitializeNativeAsmParser();

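       /* Build a module that defines: double identity(double n) { return n; } */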
       LLVMModuleRef mod = LLVMModuleCreateWithName("minimal_module");
       LLVMTypeRef identity_args[] = { LLVMDoubleType() };
       LLVMValueRef identity = LLVMAddFunction(mod, "identity", LLVMFunctionType(LLVMDoubleType(), identity_args, 1, 0));
       LLVMSetFunctionCallConv(identity, LLVMCCallConv);
       LLVMValueRef n = LLVMGetParam(identity, 0);

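       /* Emit the function body: a single entry block that just returns the parameter. */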
       LLVMBasicBlockRef entry = LLVMAppendBasicBlock(identity, "entry");
       LLVMBuilderRef builder = LLVMCreateBuilder();
       LLVMPositionBuilderAtEnd(builder, entry);
       LLVMBuildRet(builder, n);

       LLVMVerifyModule(mod, LLVMAbortProcessAction, &error);
       LLVMDisposeMessage(error);

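       /* Create an MCJIT execution engine for the module (optimization level 2). */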
       LLVMExecutionEngineRef engine;
       error = NULL;
       if(LLVMCreateJITCompilerForModule(&engine, mod, 2, &error) != 0) {
         fprintf(stderr, "%s\n", error);
         LLVMDisposeMessage(error);
         abort();
       }

       LLVMDumpModule(mod);

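       /* Box the argument 1.25 as a generic value and call identity through the engine. */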
       LLVMGenericValueRef exec_args[] = {LLVMCreateGenericValueOfFloat(LLVMDoubleType(), 1.25)};
       LLVMGenericValueRef exec_res = LLVMRunFunction(engine, identity, 1, exec_args);
       fprintf(stderr, "\n");
       fprintf(stderr, "; Running identity(%f) with JIT...\n", 1.25);
       fprintf(stderr, "; Result: %f\n", LLVMGenericValueToFloat(LLVMDoubleType(), exec_res));

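       /* Clean up: take the module back from the engine before disposing everything. */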
       LLVMRemoveModule(engine, mod, &mod, &error);
       LLVMDisposeModule(mod);
       LLVMDisposeExecutionEngine(engine);
       LLVMDisposeBuilder(builder);
       return 0;
     }

The corresponding implementation using 32-bit integers works, i.e. 
identity(42) returns 42. However, the floating-point version above 
returns 0.0. The above program generates the following output:

     ; ModuleID = 'minimal_module'
     source_filename = "minimal_module"
     target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"

     define double @identity(double) {
     entry:
       ret double %0
     }

     ; Running identity(1.250000) with JIT...
     ; Result: 0.000000

I am using LLVM-3.9 under Debian Jessie on an AMD computer. Can anybody 
help me understand what I am doing wrong? The code is also here [1] and 
the working integer version is here [2].
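
For reference, the integer version essentially differs only in the 
following lines (a rough sketch from memory rather than a verbatim copy 
of [2]; mod and engine are the same objects as in the listing above):

     /* The function type is int32 identity(int32) instead of double identity(double). */
     LLVMTypeRef identity_args[] = { LLVMInt32Type() };
     LLVMValueRef identity = LLVMAddFunction(mod, "identity", LLVMFunctionType(LLVMInt32Type(), identity_args, 1, 0));

     /* Argument boxing and result unboxing use the integer variants of the generic-value API. */
     LLVMGenericValueRef exec_args[] = { LLVMCreateGenericValueOfInt(LLVMInt32Type(), 42, 0) };
     LLVMGenericValueRef exec_res = LLVMRunFunction(engine, identity, 1, exec_args);
     fprintf(stderr, "; Result: %d\n", (int)LLVMGenericValueToInt(exec_res, 0));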

Regards
Jan

[1] https://github.com/wedesoft/llvm-c-example/tree/double
[2] https://github.com/wedesoft/llvm-c-example

