[LLVMbugs] [Bug 20673] New: unneeded stores to stack left after memcmp optimization

bugzilla-daemon at llvm.org
Fri Aug 15 08:15:22 PDT 2014


http://llvm.org/bugs/show_bug.cgi?id=20673

            Bug ID: 20673
           Summary: unneeded stores to stack left after memcmp
                    optimization
           Product: new-bugs
           Version: trunk
          Hardware: PC
                OS: All
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: new bugs
          Assignee: unassignedbugs at nondot.org
          Reporter: llvm at insonuit.org
                CC: llvmbugs at cs.uiuc.edu
    Classification: Unclassified

Created attachment 12901
  --> http://llvm.org/bugs/attachment.cgi?id=12901&action=edit
Reduced test case for inlined small structure comparisons

clang replaces a small memcmp() operation with direct comparisons, which
is great.  But in a test case reduced from our product, there are cases
where temporary stores remain after this optimization, and occasionally
the comparison is made against an in-memory temporary rather than a
register.

In our case, this occurs when working with a small (8-byte) structure.
It's passed in a register, so a direct register-to-register comparison
is possible.  But clang is leaving a dead store-to-memory operation
after the optimization, and sometimes comparing against memory instead.
We originally saw this with 3.4 and I reproduced it with trunk as of
r214549 (and Apple LLVM 5.1).

I'll attach the full test file, but here's a simple example:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* 8-byte structure: small enough to fit in a single register. */
struct ss {
        uint16_t     f1;
        uint8_t      f2;
        uint8_t      f3;
        uint32_t     f4;
};

static inline bool
ss_equal(const struct ss a, const struct ss b)
{
        return (memcmp(&a, &b, sizeof(a)) == 0);
}

struct obj {
        int          w;
        struct ss    x;
};

int equal_test(struct obj *a, struct obj *b)
{
        return ss_equal(a->x, b->x);
}

This produces:

        movq    4(%rdi), %rax
        movq    4(%rsi), %rcx
        movq    %rax, -8(%rsp)
        movq    %rcx, -16(%rsp)
        cmpq    %rcx, -8(%rsp)
        sete    %al
        movzbl  %al, %eax
        retq

where we would hope for

        movq    4(%rdi), %rax
        cmpq    4(%rsi), %rax
        sete    %al
        movzbl  %al, %eax
        retq

We've used type-casting tricks to get the latter for now, but they rely
on behavior that isn't well defined, and we'd like to move toward using
only standards-compliant, warning-free code.
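
For illustration, the cast-based version looks roughly like this (a
simplified sketch rather than our exact code, and ss_equal_cast is just
an illustrative name; it reuses struct ss from the example above):

static inline bool
ss_equal_cast(const struct ss *a, const struct ss *b)
{
        /* Reinterpret the 8-byte struct as a single uint64_t and
         * compare.  This tends to give the single cmpq we want, but
         * the cast violates strict aliasing, which is why we want to
         * drop it. */
        return *(const uint64_t *)a == *(const uint64_t *)b;
}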
