[PATCH] D98146: OpaquePtr: Turn inalloca into a type attribute

David Blaikie via Phabricator via cfe-commits <cfe-commits at lists.llvm.org>
Mon Mar 8 14:37:39 PST 2021


dblaikie added inline comments.


================
Comment at: llvm/lib/IR/Attributes.cpp:490
     if (Type *Ty = getValueAsType()) {
-      raw_string_ostream OS(Result);
+      // FIXME: This should never be null
       Result += '(';
----------------
arsenm wrote:
> dblaikie wrote:
> > Is it? Could you replace this with an assertion instead? 
> It should never be null, but it is in some cases (I think there are some situations still where callsite attributes are missing byval)
Any chance of clarifying why it is sometimes null today in the comment?
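
For reference, a minimal sketch of what the assert-based variant could look like - this is not the actual patch, and printTypeAttr is only an illustrative helper name; getValueAsType() and Type::print() are the existing APIs:

  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Type.h"
  #include "llvm/Support/raw_ostream.h"
  #include <cassert>
  #include <string>

  // Hedged sketch, not the committed code: print a type attribute such as
  // inalloca(<ty>) or byval(<ty>), asserting that the type is present.
  static std::string printTypeAttr(llvm::StringRef Name, llvm::Attribute A) {
    std::string Result = Name.str();
    llvm::Type *Ty = A.getValueAsType();
    // The assertion suggested above; today it could still fire because some
    // call-site attributes are created without their type (per arsenm's note).
    assert(Ty && "type attribute without a type");
    llvm::raw_string_ostream OS(Result);
    OS << '(';
    Ty->print(OS, /*IsForDebug=*/false, /*NoDetails=*/true);
    OS << ')';
    return OS.str();
  }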


================
Comment at: llvm/test/Bitcode/compatibility-3.6.ll:408-409
 ; CHECK: declare void @f.param.byval({ i8, i8 }* byval({ i8, i8 }))
-declare void @f.param.inalloca(i8* inalloca)
-; CHECK: declare void @f.param.inalloca(i8* inalloca)
+declare void @f.param.inalloca(i8* inalloca(i8))
+; CHECK: declare void @f.param.inalloca(i8* inalloca(i8))
 declare void @f.param.sret(i8* sret(i8))
----------------
arsenm wrote:
> dblaikie wrote:
> > It's confusing to me (though seems to be consistent with the byval change here - but maybe that's a mistake too) why the textual IR in this file is being modified - it has no effect (the textual IR is not used by the test) and the intent is to test compatibility with old IR (which is in the .bc file that is used) - so maybe it's best to leave the text here in a form that matches the old bitcode that the test is running against?
> > 
> > (similarly in the other compatibility tests updated in this patch)
> Why do these tests have textual IR in them at all then? I don't think the concept of the text IR corresponding to the old bitcode really exists. I would expect to see the current syntax, but I don't actually see why this has a .ll suffix and isn't just a plain text file with check lines
Seems useful to me to know what we're testing against - without the textual IR, how would we know what the CHECKs are meant to be demonstrating? But I do think the textual IR should be the old/original IR, not updated IR - so we can see the difference.

Not sure how far these tests have already drifted from my perspective - maybe it's only a few mistakes that have been repeated, or maybe other developers have taken a more fundamentally different view of how these tests should be written when making/maintaining them.


================
Comment at: llvm/test/Bitcode/inalloca-upgrade.test:1-7
+RUN: llvm-dis %p/Inputs/inalloca-upgrade.bc -o - | FileCheck %s
+
+Make sure we upgrade old-style IntAttribute inalloca records to a
+fully typed version correctly.
+
+CHECK: call void @bar({ i32*, i8 }* inalloca({ i32*, i8 }) %ptr)
+CHECK: invoke void @bar({ i32*, i8 }* inalloca({ i32*, i8 }) %ptr)
----------------
arsenm wrote:
> dblaikie wrote:
> > Isn't this tested by all the compatibility tests already? (since they contain the old-style inalloca, and verify that when read in it's upgraded to the new-style) Though I don't mind terribly having a separate test, but each file/tool execution does add up & it's nice to keep them to a minimum when test coverage isn't otherwise compromised.
> I was following along with byval, which has its own test like this. I also generally hate the all-in-one style of testing features that are really unrelated. It makes it harder to know what actually broke when one fails.
A single file with many tests for different features related to, say, bitcode compatibility, seems good to me.

If the failures are hard to read, there are various FileCheck features that can help improve the failure modes (e.g. CHECK-LABEL to isolate one checking area from another, so failures don't bleed together and the failure point doesn't drift a long way from the initial problem).

Having every individual test in a separate file may significantly slow down test execution (especially on Windows, where process overhead is higher, if I recall correctly), and I think it reduces the chance of finding and grouping related tests together - leading to a proliferation of subtly overlapping tests rather than a more systematic approach to how a feature is tested. (Though that's always challenging - different people think in different groupings, naming is hard, etc.)
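
For context, the upgrade this test exercises amounts to roughly the following - a hedged sketch, not the actual BitcodeReader/AutoUpgrade code, with upgradeInAllocaParam as an illustrative name; it assumes the post-patch typed inalloca mirrors the existing byval/sret handling:

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Function.h"

  // Hedged sketch: an old-style enum inalloca carries no type payload, so the
  // allocated type is recovered from the (pre-opaque-pointer) pointer type and
  // re-attached as a typed attribute.
  static void upgradeInAllocaParam(llvm::Function &F, unsigned ArgNo) {
    using namespace llvm;
    Argument &Arg = *F.getArg(ArgNo);
    if (!Arg.hasAttribute(Attribute::InAlloca))
      return;
    Type *ElemTy = cast<PointerType>(Arg.getType())->getElementType();
    F.removeParamAttr(ArgNo, Attribute::InAlloca);
    F.addParamAttr(ArgNo,
                   Attribute::get(F.getContext(), Attribute::InAlloca, ElemTy));
  }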


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D98146/new/

https://reviews.llvm.org/D98146


